Test Report: KVM_Linux_containerd 18063

9a5d81419c51a6c3c4fef58cf8d1de8416716248:2024-02-29:33343

Test fail (11/316)

TestIngressAddonLegacy/StartLegacyK8sCluster (291.04s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-amd64 start -p ingress-addon-legacy-671566 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd
E0229 01:22:12.442201  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/addons-026134/client.crt: no such file or directory
E0229 01:24:28.597002  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/addons-026134/client.crt: no such file or directory
E0229 01:24:56.283941  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/addons-026134/client.crt: no such file or directory
E0229 01:26:14.620399  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/functional-601906/client.crt: no such file or directory
E0229 01:26:14.626321  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/functional-601906/client.crt: no such file or directory
E0229 01:26:14.636596  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/functional-601906/client.crt: no such file or directory
E0229 01:26:14.656863  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/functional-601906/client.crt: no such file or directory
E0229 01:26:14.697120  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/functional-601906/client.crt: no such file or directory
E0229 01:26:14.777453  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/functional-601906/client.crt: no such file or directory
E0229 01:26:14.937856  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/functional-601906/client.crt: no such file or directory
E0229 01:26:15.258496  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/functional-601906/client.crt: no such file or directory
E0229 01:26:15.899469  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/functional-601906/client.crt: no such file or directory
E0229 01:26:17.179963  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/functional-601906/client.crt: no such file or directory
E0229 01:26:19.741127  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/functional-601906/client.crt: no such file or directory
E0229 01:26:24.862028  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/functional-601906/client.crt: no such file or directory
E0229 01:26:35.103138  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/functional-601906/client.crt: no such file or directory
E0229 01:26:55.584000  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/functional-601906/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p ingress-addon-legacy-671566 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd: exit status 109 (4m50.984566235s)

-- stdout --
	* [ingress-addon-legacy-671566] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18063
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18063-309085/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18063-309085/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting control plane node ingress-addon-legacy-671566 in cluster ingress-addon-legacy-671566
	* Downloading Kubernetes v1.18.20 preload ...
	* Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.18.20 on containerd 1.7.11 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	X Problems detected in kubelet:
	  Feb 29 01:26:54 ingress-addon-legacy-671566 kubelet[6134]: F0229 01:26:54.305038    6134 kubelet.go:1399] Failed to start ContainerManager failed to get rootfs info: unable to find data in memory cache
	  Feb 29 01:26:55 ingress-addon-legacy-671566 kubelet[6161]: F0229 01:26:55.554030    6161 kubelet.go:1399] Failed to start ContainerManager failed to get rootfs info: unable to find data in memory cache
	  Feb 29 01:26:56 ingress-addon-legacy-671566 kubelet[6187]: F0229 01:26:56.762409    6187 kubelet.go:1399] Failed to start ContainerManager failed to get rootfs info: unable to find data in memory cache
	
	

-- /stdout --
** stderr ** 
	I0229 01:22:11.898503  325441 out.go:291] Setting OutFile to fd 1 ...
	I0229 01:22:11.898776  325441 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 01:22:11.898787  325441 out.go:304] Setting ErrFile to fd 2...
	I0229 01:22:11.898794  325441 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 01:22:11.899020  325441 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18063-309085/.minikube/bin
	I0229 01:22:11.899659  325441 out.go:298] Setting JSON to false
	I0229 01:22:11.900691  325441 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":3876,"bootTime":1709165856,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1052-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0229 01:22:11.900760  325441 start.go:139] virtualization: kvm guest
	I0229 01:22:11.902731  325441 out.go:177] * [ingress-addon-legacy-671566] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0229 01:22:11.904188  325441 out.go:177]   - MINIKUBE_LOCATION=18063
	I0229 01:22:11.905327  325441 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0229 01:22:11.904148  325441 notify.go:220] Checking for updates...
	I0229 01:22:11.907415  325441 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18063-309085/kubeconfig
	I0229 01:22:11.908645  325441 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18063-309085/.minikube
	I0229 01:22:11.909769  325441 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0229 01:22:11.910844  325441 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0229 01:22:11.912164  325441 driver.go:392] Setting default libvirt URI to qemu:///system
	I0229 01:22:11.945211  325441 out.go:177] * Using the kvm2 driver based on user configuration
	I0229 01:22:11.946186  325441 start.go:299] selected driver: kvm2
	I0229 01:22:11.946198  325441 start.go:903] validating driver "kvm2" against <nil>
	I0229 01:22:11.946211  325441 start.go:914] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0229 01:22:11.946937  325441 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 01:22:11.947028  325441 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18063-309085/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0229 01:22:11.961259  325441 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0229 01:22:11.961307  325441 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0229 01:22:11.961532  325441 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0229 01:22:11.961615  325441 cni.go:84] Creating CNI manager for ""
	I0229 01:22:11.961632  325441 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0229 01:22:11.961647  325441 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0229 01:22:11.961659  325441 start_flags.go:323] config:
	{Name:ingress-addon-legacy-671566 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-671566 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 01:22:11.961846  325441 iso.go:125] acquiring lock: {Name:mk1a6013324bd96d611c2de882ca0af6f4df38f8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 01:22:11.963400  325441 out.go:177] * Starting control plane node ingress-addon-legacy-671566 in cluster ingress-addon-legacy-671566
	I0229 01:22:11.964534  325441 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime containerd
	I0229 01:22:12.463360  325441 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-amd64.tar.lz4
	I0229 01:22:12.463393  325441 cache.go:56] Caching tarball of preloaded images
	I0229 01:22:12.463594  325441 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime containerd
	I0229 01:22:12.466050  325441 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I0229 01:22:12.467253  325441 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-amd64.tar.lz4 ...
	I0229 01:22:12.577567  325441 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-amd64.tar.lz4?checksum=md5:b585eebe982180189fed21f0bd283cca -> /home/jenkins/minikube-integration/18063-309085/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-amd64.tar.lz4
	I0229 01:22:32.878632  325441 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-amd64.tar.lz4 ...
	I0229 01:22:32.878729  325441 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/18063-309085/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-amd64.tar.lz4 ...
	I0229 01:22:33.948296  325441 cache.go:59] Finished verifying existence of preloaded tar for  v1.18.20 on containerd
	I0229 01:22:33.948632  325441 profile.go:148] Saving config to /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/ingress-addon-legacy-671566/config.json ...
	I0229 01:22:33.948663  325441 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/ingress-addon-legacy-671566/config.json: {Name:mk9b97164cd8f4f8241d6ee97e5ecd8f0f0f5077 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 01:22:33.948856  325441 start.go:365] acquiring machines lock for ingress-addon-legacy-671566: {Name:mk8de78527e9cb979575b614e5d893b33768243a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0229 01:22:33.948889  325441 start.go:369] acquired machines lock for "ingress-addon-legacy-671566" in 18.053µs
	I0229 01:22:33.948906  325441 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-671566 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-671566 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0229 01:22:33.948983  325441 start.go:125] createHost starting for "" (driver="kvm2")
	I0229 01:22:33.951664  325441 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I0229 01:22:33.951825  325441 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0229 01:22:33.951865  325441 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 01:22:33.967099  325441 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44677
	I0229 01:22:33.967562  325441 main.go:141] libmachine: () Calling .GetVersion
	I0229 01:22:33.968109  325441 main.go:141] libmachine: Using API Version  1
	I0229 01:22:33.968131  325441 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 01:22:33.968476  325441 main.go:141] libmachine: () Calling .GetMachineName
	I0229 01:22:33.968697  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) Calling .GetMachineName
	I0229 01:22:33.968875  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) Calling .DriverName
	I0229 01:22:33.969010  325441 start.go:159] libmachine.API.Create for "ingress-addon-legacy-671566" (driver="kvm2")
	I0229 01:22:33.969035  325441 client.go:168] LocalClient.Create starting
	I0229 01:22:33.969070  325441 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18063-309085/.minikube/certs/ca.pem
	I0229 01:22:33.969110  325441 main.go:141] libmachine: Decoding PEM data...
	I0229 01:22:33.969133  325441 main.go:141] libmachine: Parsing certificate...
	I0229 01:22:33.969211  325441 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18063-309085/.minikube/certs/cert.pem
	I0229 01:22:33.969243  325441 main.go:141] libmachine: Decoding PEM data...
	I0229 01:22:33.969263  325441 main.go:141] libmachine: Parsing certificate...
	I0229 01:22:33.969289  325441 main.go:141] libmachine: Running pre-create checks...
	I0229 01:22:33.969304  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) Calling .PreCreateCheck
	I0229 01:22:33.969642  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) Calling .GetConfigRaw
	I0229 01:22:33.969983  325441 main.go:141] libmachine: Creating machine...
	I0229 01:22:33.969999  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) Calling .Create
	I0229 01:22:33.970139  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) Creating KVM machine...
	I0229 01:22:33.971316  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | found existing default KVM network
	I0229 01:22:33.972148  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | I0229 01:22:33.972013  325520 network.go:207] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00012d960}
	I0229 01:22:33.977048  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | trying to create private KVM network mk-ingress-addon-legacy-671566 192.168.39.0/24...
	I0229 01:22:34.040799  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | private KVM network mk-ingress-addon-legacy-671566 192.168.39.0/24 created
	I0229 01:22:34.040845  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | I0229 01:22:34.040768  325520 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18063-309085/.minikube
	I0229 01:22:34.040860  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) Setting up store path in /home/jenkins/minikube-integration/18063-309085/.minikube/machines/ingress-addon-legacy-671566 ...
	I0229 01:22:34.040878  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) Building disk image from file:///home/jenkins/minikube-integration/18063-309085/.minikube/cache/iso/amd64/minikube-v1.32.1-1708638130-18020-amd64.iso
	I0229 01:22:34.041023  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) Downloading /home/jenkins/minikube-integration/18063-309085/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18063-309085/.minikube/cache/iso/amd64/minikube-v1.32.1-1708638130-18020-amd64.iso...
	I0229 01:22:34.289385  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | I0229 01:22:34.289248  325520 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18063-309085/.minikube/machines/ingress-addon-legacy-671566/id_rsa...
	I0229 01:22:34.490112  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | I0229 01:22:34.489954  325520 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18063-309085/.minikube/machines/ingress-addon-legacy-671566/ingress-addon-legacy-671566.rawdisk...
	I0229 01:22:34.490159  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | Writing magic tar header
	I0229 01:22:34.490203  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | Writing SSH key tar header
	I0229 01:22:34.490221  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | I0229 01:22:34.490071  325520 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18063-309085/.minikube/machines/ingress-addon-legacy-671566 ...
	I0229 01:22:34.490240  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18063-309085/.minikube/machines/ingress-addon-legacy-671566
	I0229 01:22:34.490251  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) Setting executable bit set on /home/jenkins/minikube-integration/18063-309085/.minikube/machines/ingress-addon-legacy-671566 (perms=drwx------)
	I0229 01:22:34.490261  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18063-309085/.minikube/machines
	I0229 01:22:34.490274  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) Setting executable bit set on /home/jenkins/minikube-integration/18063-309085/.minikube/machines (perms=drwxr-xr-x)
	I0229 01:22:34.490294  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) Setting executable bit set on /home/jenkins/minikube-integration/18063-309085/.minikube (perms=drwxr-xr-x)
	I0229 01:22:34.490307  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) Setting executable bit set on /home/jenkins/minikube-integration/18063-309085 (perms=drwxrwxr-x)
	I0229 01:22:34.490316  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18063-309085/.minikube
	I0229 01:22:34.490331  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18063-309085
	I0229 01:22:34.490340  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0229 01:22:34.490348  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | Checking permissions on dir: /home/jenkins
	I0229 01:22:34.490357  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0229 01:22:34.490373  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0229 01:22:34.490386  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | Checking permissions on dir: /home
	I0229 01:22:34.490398  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) Creating domain...
	I0229 01:22:34.490411  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | Skipping /home - not owner
	I0229 01:22:34.491421  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) define libvirt domain using xml: 
	I0229 01:22:34.491449  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) <domain type='kvm'>
	I0229 01:22:34.491462  325441 main.go:141] libmachine: (ingress-addon-legacy-671566)   <name>ingress-addon-legacy-671566</name>
	I0229 01:22:34.491473  325441 main.go:141] libmachine: (ingress-addon-legacy-671566)   <memory unit='MiB'>4096</memory>
	I0229 01:22:34.491490  325441 main.go:141] libmachine: (ingress-addon-legacy-671566)   <vcpu>2</vcpu>
	I0229 01:22:34.491501  325441 main.go:141] libmachine: (ingress-addon-legacy-671566)   <features>
	I0229 01:22:34.491509  325441 main.go:141] libmachine: (ingress-addon-legacy-671566)     <acpi/>
	I0229 01:22:34.491520  325441 main.go:141] libmachine: (ingress-addon-legacy-671566)     <apic/>
	I0229 01:22:34.491528  325441 main.go:141] libmachine: (ingress-addon-legacy-671566)     <pae/>
	I0229 01:22:34.491536  325441 main.go:141] libmachine: (ingress-addon-legacy-671566)     
	I0229 01:22:34.491542  325441 main.go:141] libmachine: (ingress-addon-legacy-671566)   </features>
	I0229 01:22:34.491550  325441 main.go:141] libmachine: (ingress-addon-legacy-671566)   <cpu mode='host-passthrough'>
	I0229 01:22:34.491558  325441 main.go:141] libmachine: (ingress-addon-legacy-671566)   
	I0229 01:22:34.491564  325441 main.go:141] libmachine: (ingress-addon-legacy-671566)   </cpu>
	I0229 01:22:34.491570  325441 main.go:141] libmachine: (ingress-addon-legacy-671566)   <os>
	I0229 01:22:34.491579  325441 main.go:141] libmachine: (ingress-addon-legacy-671566)     <type>hvm</type>
	I0229 01:22:34.491599  325441 main.go:141] libmachine: (ingress-addon-legacy-671566)     <boot dev='cdrom'/>
	I0229 01:22:34.491613  325441 main.go:141] libmachine: (ingress-addon-legacy-671566)     <boot dev='hd'/>
	I0229 01:22:34.491620  325441 main.go:141] libmachine: (ingress-addon-legacy-671566)     <bootmenu enable='no'/>
	I0229 01:22:34.491629  325441 main.go:141] libmachine: (ingress-addon-legacy-671566)   </os>
	I0229 01:22:34.491637  325441 main.go:141] libmachine: (ingress-addon-legacy-671566)   <devices>
	I0229 01:22:34.491647  325441 main.go:141] libmachine: (ingress-addon-legacy-671566)     <disk type='file' device='cdrom'>
	I0229 01:22:34.491667  325441 main.go:141] libmachine: (ingress-addon-legacy-671566)       <source file='/home/jenkins/minikube-integration/18063-309085/.minikube/machines/ingress-addon-legacy-671566/boot2docker.iso'/>
	I0229 01:22:34.491685  325441 main.go:141] libmachine: (ingress-addon-legacy-671566)       <target dev='hdc' bus='scsi'/>
	I0229 01:22:34.491692  325441 main.go:141] libmachine: (ingress-addon-legacy-671566)       <readonly/>
	I0229 01:22:34.491704  325441 main.go:141] libmachine: (ingress-addon-legacy-671566)     </disk>
	I0229 01:22:34.491714  325441 main.go:141] libmachine: (ingress-addon-legacy-671566)     <disk type='file' device='disk'>
	I0229 01:22:34.491724  325441 main.go:141] libmachine: (ingress-addon-legacy-671566)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0229 01:22:34.491738  325441 main.go:141] libmachine: (ingress-addon-legacy-671566)       <source file='/home/jenkins/minikube-integration/18063-309085/.minikube/machines/ingress-addon-legacy-671566/ingress-addon-legacy-671566.rawdisk'/>
	I0229 01:22:34.491753  325441 main.go:141] libmachine: (ingress-addon-legacy-671566)       <target dev='hda' bus='virtio'/>
	I0229 01:22:34.491763  325441 main.go:141] libmachine: (ingress-addon-legacy-671566)     </disk>
	I0229 01:22:34.491776  325441 main.go:141] libmachine: (ingress-addon-legacy-671566)     <interface type='network'>
	I0229 01:22:34.491791  325441 main.go:141] libmachine: (ingress-addon-legacy-671566)       <source network='mk-ingress-addon-legacy-671566'/>
	I0229 01:22:34.491804  325441 main.go:141] libmachine: (ingress-addon-legacy-671566)       <model type='virtio'/>
	I0229 01:22:34.491811  325441 main.go:141] libmachine: (ingress-addon-legacy-671566)     </interface>
	I0229 01:22:34.491821  325441 main.go:141] libmachine: (ingress-addon-legacy-671566)     <interface type='network'>
	I0229 01:22:34.491832  325441 main.go:141] libmachine: (ingress-addon-legacy-671566)       <source network='default'/>
	I0229 01:22:34.491845  325441 main.go:141] libmachine: (ingress-addon-legacy-671566)       <model type='virtio'/>
	I0229 01:22:34.491859  325441 main.go:141] libmachine: (ingress-addon-legacy-671566)     </interface>
	I0229 01:22:34.491872  325441 main.go:141] libmachine: (ingress-addon-legacy-671566)     <serial type='pty'>
	I0229 01:22:34.491882  325441 main.go:141] libmachine: (ingress-addon-legacy-671566)       <target port='0'/>
	I0229 01:22:34.491890  325441 main.go:141] libmachine: (ingress-addon-legacy-671566)     </serial>
	I0229 01:22:34.491897  325441 main.go:141] libmachine: (ingress-addon-legacy-671566)     <console type='pty'>
	I0229 01:22:34.491906  325441 main.go:141] libmachine: (ingress-addon-legacy-671566)       <target type='serial' port='0'/>
	I0229 01:22:34.491917  325441 main.go:141] libmachine: (ingress-addon-legacy-671566)     </console>
	I0229 01:22:34.491929  325441 main.go:141] libmachine: (ingress-addon-legacy-671566)     <rng model='virtio'>
	I0229 01:22:34.491943  325441 main.go:141] libmachine: (ingress-addon-legacy-671566)       <backend model='random'>/dev/random</backend>
	I0229 01:22:34.491958  325441 main.go:141] libmachine: (ingress-addon-legacy-671566)     </rng>
	I0229 01:22:34.491969  325441 main.go:141] libmachine: (ingress-addon-legacy-671566)     
	I0229 01:22:34.491983  325441 main.go:141] libmachine: (ingress-addon-legacy-671566)     
	I0229 01:22:34.491990  325441 main.go:141] libmachine: (ingress-addon-legacy-671566)   </devices>
	I0229 01:22:34.492004  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) </domain>
	I0229 01:22:34.492041  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) 
	I0229 01:22:34.495930  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | domain ingress-addon-legacy-671566 has defined MAC address 52:54:00:7e:da:21 in network default
	I0229 01:22:34.496528  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) Ensuring networks are active...
	I0229 01:22:34.496559  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | domain ingress-addon-legacy-671566 has defined MAC address 52:54:00:3b:c8:ec in network mk-ingress-addon-legacy-671566
	I0229 01:22:34.497242  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) Ensuring network default is active
	I0229 01:22:34.497536  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) Ensuring network mk-ingress-addon-legacy-671566 is active
	I0229 01:22:34.498058  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) Getting domain xml...
	I0229 01:22:34.498694  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) Creating domain...
	I0229 01:22:35.671709  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) Waiting to get IP...
	I0229 01:22:35.672637  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | domain ingress-addon-legacy-671566 has defined MAC address 52:54:00:3b:c8:ec in network mk-ingress-addon-legacy-671566
	I0229 01:22:35.673049  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | unable to find current IP address of domain ingress-addon-legacy-671566 in network mk-ingress-addon-legacy-671566
	I0229 01:22:35.673093  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | I0229 01:22:35.673015  325520 retry.go:31] will retry after 285.941832ms: waiting for machine to come up
	I0229 01:22:35.960467  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | domain ingress-addon-legacy-671566 has defined MAC address 52:54:00:3b:c8:ec in network mk-ingress-addon-legacy-671566
	I0229 01:22:35.960897  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | unable to find current IP address of domain ingress-addon-legacy-671566 in network mk-ingress-addon-legacy-671566
	I0229 01:22:35.960928  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | I0229 01:22:35.960863  325520 retry.go:31] will retry after 243.277464ms: waiting for machine to come up
	I0229 01:22:36.205244  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | domain ingress-addon-legacy-671566 has defined MAC address 52:54:00:3b:c8:ec in network mk-ingress-addon-legacy-671566
	I0229 01:22:36.205643  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | unable to find current IP address of domain ingress-addon-legacy-671566 in network mk-ingress-addon-legacy-671566
	I0229 01:22:36.205671  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | I0229 01:22:36.205592  325520 retry.go:31] will retry after 418.531661ms: waiting for machine to come up
	I0229 01:22:36.626173  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | domain ingress-addon-legacy-671566 has defined MAC address 52:54:00:3b:c8:ec in network mk-ingress-addon-legacy-671566
	I0229 01:22:36.626689  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | unable to find current IP address of domain ingress-addon-legacy-671566 in network mk-ingress-addon-legacy-671566
	I0229 01:22:36.626718  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | I0229 01:22:36.626638  325520 retry.go:31] will retry after 468.757069ms: waiting for machine to come up
	I0229 01:22:37.097171  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | domain ingress-addon-legacy-671566 has defined MAC address 52:54:00:3b:c8:ec in network mk-ingress-addon-legacy-671566
	I0229 01:22:37.097625  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | unable to find current IP address of domain ingress-addon-legacy-671566 in network mk-ingress-addon-legacy-671566
	I0229 01:22:37.097656  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | I0229 01:22:37.097553  325520 retry.go:31] will retry after 516.742124ms: waiting for machine to come up
	I0229 01:22:37.616345  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | domain ingress-addon-legacy-671566 has defined MAC address 52:54:00:3b:c8:ec in network mk-ingress-addon-legacy-671566
	I0229 01:22:37.616783  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | unable to find current IP address of domain ingress-addon-legacy-671566 in network mk-ingress-addon-legacy-671566
	I0229 01:22:37.616807  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | I0229 01:22:37.616724  325520 retry.go:31] will retry after 840.859173ms: waiting for machine to come up
	I0229 01:22:38.458829  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | domain ingress-addon-legacy-671566 has defined MAC address 52:54:00:3b:c8:ec in network mk-ingress-addon-legacy-671566
	I0229 01:22:38.459252  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | unable to find current IP address of domain ingress-addon-legacy-671566 in network mk-ingress-addon-legacy-671566
	I0229 01:22:38.459290  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | I0229 01:22:38.459183  325520 retry.go:31] will retry after 1.160952675s: waiting for machine to come up
	I0229 01:22:39.621904  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | domain ingress-addon-legacy-671566 has defined MAC address 52:54:00:3b:c8:ec in network mk-ingress-addon-legacy-671566
	I0229 01:22:39.622419  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | unable to find current IP address of domain ingress-addon-legacy-671566 in network mk-ingress-addon-legacy-671566
	I0229 01:22:39.622447  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | I0229 01:22:39.622367  325520 retry.go:31] will retry after 981.893154ms: waiting for machine to come up
	I0229 01:22:40.605788  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | domain ingress-addon-legacy-671566 has defined MAC address 52:54:00:3b:c8:ec in network mk-ingress-addon-legacy-671566
	I0229 01:22:40.606261  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | unable to find current IP address of domain ingress-addon-legacy-671566 in network mk-ingress-addon-legacy-671566
	I0229 01:22:40.606297  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | I0229 01:22:40.606226  325520 retry.go:31] will retry after 1.784036247s: waiting for machine to come up
	I0229 01:22:42.393173  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | domain ingress-addon-legacy-671566 has defined MAC address 52:54:00:3b:c8:ec in network mk-ingress-addon-legacy-671566
	I0229 01:22:42.393618  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | unable to find current IP address of domain ingress-addon-legacy-671566 in network mk-ingress-addon-legacy-671566
	I0229 01:22:42.393646  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | I0229 01:22:42.393566  325520 retry.go:31] will retry after 1.544306192s: waiting for machine to come up
	I0229 01:22:43.940353  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | domain ingress-addon-legacy-671566 has defined MAC address 52:54:00:3b:c8:ec in network mk-ingress-addon-legacy-671566
	I0229 01:22:43.940812  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | unable to find current IP address of domain ingress-addon-legacy-671566 in network mk-ingress-addon-legacy-671566
	I0229 01:22:43.940848  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | I0229 01:22:43.940763  325520 retry.go:31] will retry after 2.046404556s: waiting for machine to come up
	I0229 01:22:45.988347  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | domain ingress-addon-legacy-671566 has defined MAC address 52:54:00:3b:c8:ec in network mk-ingress-addon-legacy-671566
	I0229 01:22:45.988784  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | unable to find current IP address of domain ingress-addon-legacy-671566 in network mk-ingress-addon-legacy-671566
	I0229 01:22:45.988803  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | I0229 01:22:45.988741  325520 retry.go:31] will retry after 2.82311181s: waiting for machine to come up
	I0229 01:22:48.815601  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | domain ingress-addon-legacy-671566 has defined MAC address 52:54:00:3b:c8:ec in network mk-ingress-addon-legacy-671566
	I0229 01:22:48.815977  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | unable to find current IP address of domain ingress-addon-legacy-671566 in network mk-ingress-addon-legacy-671566
	I0229 01:22:48.816003  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | I0229 01:22:48.815935  325520 retry.go:31] will retry after 3.058609083s: waiting for machine to come up
	I0229 01:22:51.878438  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | domain ingress-addon-legacy-671566 has defined MAC address 52:54:00:3b:c8:ec in network mk-ingress-addon-legacy-671566
	I0229 01:22:51.878941  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | unable to find current IP address of domain ingress-addon-legacy-671566 in network mk-ingress-addon-legacy-671566
	I0229 01:22:51.878972  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | I0229 01:22:51.878906  325520 retry.go:31] will retry after 3.449863463s: waiting for machine to come up
	I0229 01:22:55.330867  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | domain ingress-addon-legacy-671566 has defined MAC address 52:54:00:3b:c8:ec in network mk-ingress-addon-legacy-671566
	I0229 01:22:55.331353  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) Found IP for machine: 192.168.39.248
	I0229 01:22:55.331380  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) Reserving static IP address...
	I0229 01:22:55.331397  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | domain ingress-addon-legacy-671566 has current primary IP address 192.168.39.248 and MAC address 52:54:00:3b:c8:ec in network mk-ingress-addon-legacy-671566
	I0229 01:22:55.331756  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | unable to find host DHCP lease matching {name: "ingress-addon-legacy-671566", mac: "52:54:00:3b:c8:ec", ip: "192.168.39.248"} in network mk-ingress-addon-legacy-671566
	I0229 01:22:55.401104  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | Getting to WaitForSSH function...
	I0229 01:22:55.401138  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) Reserved static IP address: 192.168.39.248
	I0229 01:22:55.401153  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) Waiting for SSH to be available...
	I0229 01:22:55.403683  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | domain ingress-addon-legacy-671566 has defined MAC address 52:54:00:3b:c8:ec in network mk-ingress-addon-legacy-671566
	I0229 01:22:55.404141  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:c8:ec", ip: ""} in network mk-ingress-addon-legacy-671566: {Iface:virbr1 ExpiryTime:2024-02-29 02:22:49 +0000 UTC Type:0 Mac:52:54:00:3b:c8:ec Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:minikube Clientid:01:52:54:00:3b:c8:ec}
	I0229 01:22:55.404173  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | domain ingress-addon-legacy-671566 has defined IP address 192.168.39.248 and MAC address 52:54:00:3b:c8:ec in network mk-ingress-addon-legacy-671566
	I0229 01:22:55.404390  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | Using SSH client type: external
	I0229 01:22:55.404431  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | Using SSH private key: /home/jenkins/minikube-integration/18063-309085/.minikube/machines/ingress-addon-legacy-671566/id_rsa (-rw-------)
	I0229 01:22:55.404477  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.248 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18063-309085/.minikube/machines/ingress-addon-legacy-671566/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0229 01:22:55.404498  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | About to run SSH command:
	I0229 01:22:55.404514  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | exit 0
	I0229 01:22:55.529946  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | SSH cmd err, output: <nil>: 
	I0229 01:22:55.530187  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) KVM machine creation complete!
	I0229 01:22:55.530509  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) Calling .GetConfigRaw
	I0229 01:22:55.531058  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) Calling .DriverName
	I0229 01:22:55.531263  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) Calling .DriverName
	I0229 01:22:55.531417  325441 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0229 01:22:55.531434  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) Calling .GetState
	I0229 01:22:55.532486  325441 main.go:141] libmachine: Detecting operating system of created instance...
	I0229 01:22:55.532502  325441 main.go:141] libmachine: Waiting for SSH to be available...
	I0229 01:22:55.532507  325441 main.go:141] libmachine: Getting to WaitForSSH function...
	I0229 01:22:55.532513  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) Calling .GetSSHHostname
	I0229 01:22:55.534723  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | domain ingress-addon-legacy-671566 has defined MAC address 52:54:00:3b:c8:ec in network mk-ingress-addon-legacy-671566
	I0229 01:22:55.535084  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:c8:ec", ip: ""} in network mk-ingress-addon-legacy-671566: {Iface:virbr1 ExpiryTime:2024-02-29 02:22:49 +0000 UTC Type:0 Mac:52:54:00:3b:c8:ec Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:ingress-addon-legacy-671566 Clientid:01:52:54:00:3b:c8:ec}
	I0229 01:22:55.535113  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | domain ingress-addon-legacy-671566 has defined IP address 192.168.39.248 and MAC address 52:54:00:3b:c8:ec in network mk-ingress-addon-legacy-671566
	I0229 01:22:55.535257  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) Calling .GetSSHPort
	I0229 01:22:55.535466  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) Calling .GetSSHKeyPath
	I0229 01:22:55.535594  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) Calling .GetSSHKeyPath
	I0229 01:22:55.535697  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) Calling .GetSSHUsername
	I0229 01:22:55.535838  325441 main.go:141] libmachine: Using SSH client type: native
	I0229 01:22:55.536034  325441 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.248 22 <nil> <nil>}
	I0229 01:22:55.536047  325441 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0229 01:22:55.637123  325441 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0229 01:22:55.637142  325441 main.go:141] libmachine: Detecting the provisioner...
	I0229 01:22:55.637150  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) Calling .GetSSHHostname
	I0229 01:22:55.639921  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | domain ingress-addon-legacy-671566 has defined MAC address 52:54:00:3b:c8:ec in network mk-ingress-addon-legacy-671566
	I0229 01:22:55.640193  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:c8:ec", ip: ""} in network mk-ingress-addon-legacy-671566: {Iface:virbr1 ExpiryTime:2024-02-29 02:22:49 +0000 UTC Type:0 Mac:52:54:00:3b:c8:ec Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:ingress-addon-legacy-671566 Clientid:01:52:54:00:3b:c8:ec}
	I0229 01:22:55.640218  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | domain ingress-addon-legacy-671566 has defined IP address 192.168.39.248 and MAC address 52:54:00:3b:c8:ec in network mk-ingress-addon-legacy-671566
	I0229 01:22:55.640354  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) Calling .GetSSHPort
	I0229 01:22:55.640525  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) Calling .GetSSHKeyPath
	I0229 01:22:55.640719  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) Calling .GetSSHKeyPath
	I0229 01:22:55.640899  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) Calling .GetSSHUsername
	I0229 01:22:55.641071  325441 main.go:141] libmachine: Using SSH client type: native
	I0229 01:22:55.641251  325441 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.248 22 <nil> <nil>}
	I0229 01:22:55.641262  325441 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0229 01:22:55.742934  325441 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0229 01:22:55.743031  325441 main.go:141] libmachine: found compatible host: buildroot
	I0229 01:22:55.743047  325441 main.go:141] libmachine: Provisioning with buildroot...
	I0229 01:22:55.743059  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) Calling .GetMachineName
	I0229 01:22:55.743311  325441 buildroot.go:166] provisioning hostname "ingress-addon-legacy-671566"
	I0229 01:22:55.743337  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) Calling .GetMachineName
	I0229 01:22:55.743547  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) Calling .GetSSHHostname
	I0229 01:22:55.746182  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | domain ingress-addon-legacy-671566 has defined MAC address 52:54:00:3b:c8:ec in network mk-ingress-addon-legacy-671566
	I0229 01:22:55.746628  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:c8:ec", ip: ""} in network mk-ingress-addon-legacy-671566: {Iface:virbr1 ExpiryTime:2024-02-29 02:22:49 +0000 UTC Type:0 Mac:52:54:00:3b:c8:ec Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:ingress-addon-legacy-671566 Clientid:01:52:54:00:3b:c8:ec}
	I0229 01:22:55.746664  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | domain ingress-addon-legacy-671566 has defined IP address 192.168.39.248 and MAC address 52:54:00:3b:c8:ec in network mk-ingress-addon-legacy-671566
	I0229 01:22:55.746774  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) Calling .GetSSHPort
	I0229 01:22:55.746943  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) Calling .GetSSHKeyPath
	I0229 01:22:55.747063  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) Calling .GetSSHKeyPath
	I0229 01:22:55.747218  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) Calling .GetSSHUsername
	I0229 01:22:55.747358  325441 main.go:141] libmachine: Using SSH client type: native
	I0229 01:22:55.747557  325441 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.248 22 <nil> <nil>}
	I0229 01:22:55.747576  325441 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-671566 && echo "ingress-addon-legacy-671566" | sudo tee /etc/hostname
	I0229 01:22:55.866916  325441 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-671566
	
	I0229 01:22:55.866940  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) Calling .GetSSHHostname
	I0229 01:22:55.869460  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | domain ingress-addon-legacy-671566 has defined MAC address 52:54:00:3b:c8:ec in network mk-ingress-addon-legacy-671566
	I0229 01:22:55.869773  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:c8:ec", ip: ""} in network mk-ingress-addon-legacy-671566: {Iface:virbr1 ExpiryTime:2024-02-29 02:22:49 +0000 UTC Type:0 Mac:52:54:00:3b:c8:ec Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:ingress-addon-legacy-671566 Clientid:01:52:54:00:3b:c8:ec}
	I0229 01:22:55.869802  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | domain ingress-addon-legacy-671566 has defined IP address 192.168.39.248 and MAC address 52:54:00:3b:c8:ec in network mk-ingress-addon-legacy-671566
	I0229 01:22:55.869961  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) Calling .GetSSHPort
	I0229 01:22:55.870143  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) Calling .GetSSHKeyPath
	I0229 01:22:55.870309  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) Calling .GetSSHKeyPath
	I0229 01:22:55.870483  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) Calling .GetSSHUsername
	I0229 01:22:55.870650  325441 main.go:141] libmachine: Using SSH client type: native
	I0229 01:22:55.870836  325441 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.248 22 <nil> <nil>}
	I0229 01:22:55.870863  325441 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-671566' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-671566/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-671566' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0229 01:22:55.985468  325441 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0229 01:22:55.985496  325441 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18063-309085/.minikube CaCertPath:/home/jenkins/minikube-integration/18063-309085/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18063-309085/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18063-309085/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18063-309085/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18063-309085/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18063-309085/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18063-309085/.minikube}
	I0229 01:22:55.985556  325441 buildroot.go:174] setting up certificates
	I0229 01:22:55.985571  325441 provision.go:83] configureAuth start
	I0229 01:22:55.985587  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) Calling .GetMachineName
	I0229 01:22:55.985820  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) Calling .GetIP
	I0229 01:22:55.987970  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | domain ingress-addon-legacy-671566 has defined MAC address 52:54:00:3b:c8:ec in network mk-ingress-addon-legacy-671566
	I0229 01:22:55.988276  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:c8:ec", ip: ""} in network mk-ingress-addon-legacy-671566: {Iface:virbr1 ExpiryTime:2024-02-29 02:22:49 +0000 UTC Type:0 Mac:52:54:00:3b:c8:ec Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:ingress-addon-legacy-671566 Clientid:01:52:54:00:3b:c8:ec}
	I0229 01:22:55.988303  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | domain ingress-addon-legacy-671566 has defined IP address 192.168.39.248 and MAC address 52:54:00:3b:c8:ec in network mk-ingress-addon-legacy-671566
	I0229 01:22:55.988522  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) Calling .GetSSHHostname
	I0229 01:22:55.990648  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | domain ingress-addon-legacy-671566 has defined MAC address 52:54:00:3b:c8:ec in network mk-ingress-addon-legacy-671566
	I0229 01:22:55.990957  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:c8:ec", ip: ""} in network mk-ingress-addon-legacy-671566: {Iface:virbr1 ExpiryTime:2024-02-29 02:22:49 +0000 UTC Type:0 Mac:52:54:00:3b:c8:ec Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:ingress-addon-legacy-671566 Clientid:01:52:54:00:3b:c8:ec}
	I0229 01:22:55.990983  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | domain ingress-addon-legacy-671566 has defined IP address 192.168.39.248 and MAC address 52:54:00:3b:c8:ec in network mk-ingress-addon-legacy-671566
	I0229 01:22:55.991126  325441 provision.go:138] copyHostCerts
	I0229 01:22:55.991156  325441 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18063-309085/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18063-309085/.minikube/ca.pem
	I0229 01:22:55.991187  325441 exec_runner.go:144] found /home/jenkins/minikube-integration/18063-309085/.minikube/ca.pem, removing ...
	I0229 01:22:55.991211  325441 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18063-309085/.minikube/ca.pem
	I0229 01:22:55.991285  325441 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18063-309085/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18063-309085/.minikube/ca.pem (1082 bytes)
	I0229 01:22:55.991384  325441 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18063-309085/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18063-309085/.minikube/cert.pem
	I0229 01:22:55.991409  325441 exec_runner.go:144] found /home/jenkins/minikube-integration/18063-309085/.minikube/cert.pem, removing ...
	I0229 01:22:55.991418  325441 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18063-309085/.minikube/cert.pem
	I0229 01:22:55.991453  325441 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18063-309085/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18063-309085/.minikube/cert.pem (1123 bytes)
	I0229 01:22:55.991511  325441 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18063-309085/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18063-309085/.minikube/key.pem
	I0229 01:22:55.991530  325441 exec_runner.go:144] found /home/jenkins/minikube-integration/18063-309085/.minikube/key.pem, removing ...
	I0229 01:22:55.991539  325441 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18063-309085/.minikube/key.pem
	I0229 01:22:55.991573  325441 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18063-309085/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18063-309085/.minikube/key.pem (1675 bytes)
	I0229 01:22:55.991633  325441 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18063-309085/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18063-309085/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18063-309085/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-671566 san=[192.168.39.248 192.168.39.248 localhost 127.0.0.1 minikube ingress-addon-legacy-671566]
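The server-cert generation above happens inside minikube's own Go code; as a rough openssl equivalent (illustrative only, with the CA files and SANs taken from the log line):

    openssl req -new -newkey rsa:2048 -nodes \
      -keyout server-key.pem -out server.csr \
      -subj "/O=jenkins.ingress-addon-legacy-671566"
    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem \
      -CAcreateserial -out server.pem -days 365 \
      -extfile <(printf "subjectAltName=IP:192.168.39.248,IP:127.0.0.1,DNS:localhost,DNS:minikube,DNS:ingress-addon-legacy-671566")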
	I0229 01:22:56.081838  325441 provision.go:172] copyRemoteCerts
	I0229 01:22:56.081889  325441 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0229 01:22:56.081908  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) Calling .GetSSHHostname
	I0229 01:22:56.083810  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | domain ingress-addon-legacy-671566 has defined MAC address 52:54:00:3b:c8:ec in network mk-ingress-addon-legacy-671566
	I0229 01:22:56.084035  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:c8:ec", ip: ""} in network mk-ingress-addon-legacy-671566: {Iface:virbr1 ExpiryTime:2024-02-29 02:22:49 +0000 UTC Type:0 Mac:52:54:00:3b:c8:ec Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:ingress-addon-legacy-671566 Clientid:01:52:54:00:3b:c8:ec}
	I0229 01:22:56.084061  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | domain ingress-addon-legacy-671566 has defined IP address 192.168.39.248 and MAC address 52:54:00:3b:c8:ec in network mk-ingress-addon-legacy-671566
	I0229 01:22:56.084175  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) Calling .GetSSHPort
	I0229 01:22:56.084354  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) Calling .GetSSHKeyPath
	I0229 01:22:56.084511  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) Calling .GetSSHUsername
	I0229 01:22:56.084616  325441 sshutil.go:53] new ssh client: &{IP:192.168.39.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-309085/.minikube/machines/ingress-addon-legacy-671566/id_rsa Username:docker}
	I0229 01:22:56.164896  325441 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18063-309085/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0229 01:22:56.164960  325441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-309085/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0229 01:22:56.194275  325441 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18063-309085/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0229 01:22:56.194314  325441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-309085/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0229 01:22:56.222729  325441 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18063-309085/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0229 01:22:56.222777  325441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-309085/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0229 01:22:56.251228  325441 provision.go:86] duration metric: configureAuth took 265.646641ms
	I0229 01:22:56.251251  325441 buildroot.go:189] setting minikube options for container-runtime
	I0229 01:22:56.251418  325441 config.go:182] Loaded profile config "ingress-addon-legacy-671566": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.18.20
	I0229 01:22:56.251447  325441 main.go:141] libmachine: Checking connection to Docker...
	I0229 01:22:56.251464  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) Calling .GetURL
	I0229 01:22:56.252480  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | Using libvirt version 6000000
	I0229 01:22:56.254238  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | domain ingress-addon-legacy-671566 has defined MAC address 52:54:00:3b:c8:ec in network mk-ingress-addon-legacy-671566
	I0229 01:22:56.254550  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:c8:ec", ip: ""} in network mk-ingress-addon-legacy-671566: {Iface:virbr1 ExpiryTime:2024-02-29 02:22:49 +0000 UTC Type:0 Mac:52:54:00:3b:c8:ec Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:ingress-addon-legacy-671566 Clientid:01:52:54:00:3b:c8:ec}
	I0229 01:22:56.254583  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | domain ingress-addon-legacy-671566 has defined IP address 192.168.39.248 and MAC address 52:54:00:3b:c8:ec in network mk-ingress-addon-legacy-671566
	I0229 01:22:56.254702  325441 main.go:141] libmachine: Docker is up and running!
	I0229 01:22:56.254716  325441 main.go:141] libmachine: Reticulating splines...
	I0229 01:22:56.254725  325441 client.go:171] LocalClient.Create took 22.285681229s
	I0229 01:22:56.254750  325441 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-671566" took 22.285740098s
	I0229 01:22:56.254764  325441 start.go:300] post-start starting for "ingress-addon-legacy-671566" (driver="kvm2")
	I0229 01:22:56.254778  325441 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0229 01:22:56.254801  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) Calling .DriverName
	I0229 01:22:56.255023  325441 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0229 01:22:56.255045  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) Calling .GetSSHHostname
	I0229 01:22:56.256772  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | domain ingress-addon-legacy-671566 has defined MAC address 52:54:00:3b:c8:ec in network mk-ingress-addon-legacy-671566
	I0229 01:22:56.257034  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:c8:ec", ip: ""} in network mk-ingress-addon-legacy-671566: {Iface:virbr1 ExpiryTime:2024-02-29 02:22:49 +0000 UTC Type:0 Mac:52:54:00:3b:c8:ec Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:ingress-addon-legacy-671566 Clientid:01:52:54:00:3b:c8:ec}
	I0229 01:22:56.257059  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | domain ingress-addon-legacy-671566 has defined IP address 192.168.39.248 and MAC address 52:54:00:3b:c8:ec in network mk-ingress-addon-legacy-671566
	I0229 01:22:56.257175  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) Calling .GetSSHPort
	I0229 01:22:56.257358  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) Calling .GetSSHKeyPath
	I0229 01:22:56.257510  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) Calling .GetSSHUsername
	I0229 01:22:56.257629  325441 sshutil.go:53] new ssh client: &{IP:192.168.39.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-309085/.minikube/machines/ingress-addon-legacy-671566/id_rsa Username:docker}
	I0229 01:22:56.335980  325441 ssh_runner.go:195] Run: cat /etc/os-release
	I0229 01:22:56.340541  325441 info.go:137] Remote host: Buildroot 2023.02.9
	I0229 01:22:56.340567  325441 filesync.go:126] Scanning /home/jenkins/minikube-integration/18063-309085/.minikube/addons for local assets ...
	I0229 01:22:56.340622  325441 filesync.go:126] Scanning /home/jenkins/minikube-integration/18063-309085/.minikube/files for local assets ...
	I0229 01:22:56.340721  325441 filesync.go:149] local asset: /home/jenkins/minikube-integration/18063-309085/.minikube/files/etc/ssl/certs/3163362.pem -> 3163362.pem in /etc/ssl/certs
	I0229 01:22:56.340735  325441 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18063-309085/.minikube/files/etc/ssl/certs/3163362.pem -> /etc/ssl/certs/3163362.pem
	I0229 01:22:56.340851  325441 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0229 01:22:56.350377  325441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-309085/.minikube/files/etc/ssl/certs/3163362.pem --> /etc/ssl/certs/3163362.pem (1708 bytes)
	I0229 01:22:56.375480  325441 start.go:303] post-start completed in 120.702833ms
	I0229 01:22:56.375525  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) Calling .GetConfigRaw
	I0229 01:22:56.376015  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) Calling .GetIP
	I0229 01:22:56.378311  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | domain ingress-addon-legacy-671566 has defined MAC address 52:54:00:3b:c8:ec in network mk-ingress-addon-legacy-671566
	I0229 01:22:56.378615  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:c8:ec", ip: ""} in network mk-ingress-addon-legacy-671566: {Iface:virbr1 ExpiryTime:2024-02-29 02:22:49 +0000 UTC Type:0 Mac:52:54:00:3b:c8:ec Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:ingress-addon-legacy-671566 Clientid:01:52:54:00:3b:c8:ec}
	I0229 01:22:56.378646  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | domain ingress-addon-legacy-671566 has defined IP address 192.168.39.248 and MAC address 52:54:00:3b:c8:ec in network mk-ingress-addon-legacy-671566
	I0229 01:22:56.378848  325441 profile.go:148] Saving config to /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/ingress-addon-legacy-671566/config.json ...
	I0229 01:22:56.379006  325441 start.go:128] duration metric: createHost completed in 22.43001345s
	I0229 01:22:56.379027  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) Calling .GetSSHHostname
	I0229 01:22:56.381034  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | domain ingress-addon-legacy-671566 has defined MAC address 52:54:00:3b:c8:ec in network mk-ingress-addon-legacy-671566
	I0229 01:22:56.381348  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:c8:ec", ip: ""} in network mk-ingress-addon-legacy-671566: {Iface:virbr1 ExpiryTime:2024-02-29 02:22:49 +0000 UTC Type:0 Mac:52:54:00:3b:c8:ec Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:ingress-addon-legacy-671566 Clientid:01:52:54:00:3b:c8:ec}
	I0229 01:22:56.381384  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | domain ingress-addon-legacy-671566 has defined IP address 192.168.39.248 and MAC address 52:54:00:3b:c8:ec in network mk-ingress-addon-legacy-671566
	I0229 01:22:56.381472  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) Calling .GetSSHPort
	I0229 01:22:56.381671  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) Calling .GetSSHKeyPath
	I0229 01:22:56.381819  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) Calling .GetSSHKeyPath
	I0229 01:22:56.381937  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) Calling .GetSSHUsername
	I0229 01:22:56.382073  325441 main.go:141] libmachine: Using SSH client type: native
	I0229 01:22:56.382296  325441 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.248 22 <nil> <nil>}
	I0229 01:22:56.382309  325441 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0229 01:22:56.482848  325441 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709169776.449188134
	
	I0229 01:22:56.482874  325441 fix.go:206] guest clock: 1709169776.449188134
	I0229 01:22:56.482884  325441 fix.go:219] Guest: 2024-02-29 01:22:56.449188134 +0000 UTC Remote: 2024-02-29 01:22:56.379016613 +0000 UTC m=+44.530393722 (delta=70.171521ms)
	I0229 01:22:56.482910  325441 fix.go:190] guest clock delta is within tolerance: 70.171521ms
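The guest-clock check above measures drift between the VM and the host and accepts it while it stays within tolerance; a minimal shell sketch of the same comparison (tolerance handling omitted; the SSH key path and user come from the log):

    guest=$(ssh -i /home/jenkins/minikube-integration/18063-309085/.minikube/machines/ingress-addon-legacy-671566/id_rsa \
      docker@192.168.39.248 'date +%s.%N')
    host=$(date +%s.%N)
    echo "guest clock delta: $(echo "$guest - $host" | bc -l)s"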
	I0229 01:22:56.482917  325441 start.go:83] releasing machines lock for "ingress-addon-legacy-671566", held for 22.534018745s
	I0229 01:22:56.482942  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) Calling .DriverName
	I0229 01:22:56.483195  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) Calling .GetIP
	I0229 01:22:56.485724  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | domain ingress-addon-legacy-671566 has defined MAC address 52:54:00:3b:c8:ec in network mk-ingress-addon-legacy-671566
	I0229 01:22:56.486048  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:c8:ec", ip: ""} in network mk-ingress-addon-legacy-671566: {Iface:virbr1 ExpiryTime:2024-02-29 02:22:49 +0000 UTC Type:0 Mac:52:54:00:3b:c8:ec Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:ingress-addon-legacy-671566 Clientid:01:52:54:00:3b:c8:ec}
	I0229 01:22:56.486094  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | domain ingress-addon-legacy-671566 has defined IP address 192.168.39.248 and MAC address 52:54:00:3b:c8:ec in network mk-ingress-addon-legacy-671566
	I0229 01:22:56.486267  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) Calling .DriverName
	I0229 01:22:56.486801  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) Calling .DriverName
	I0229 01:22:56.486954  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) Calling .DriverName
	I0229 01:22:56.487048  325441 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0229 01:22:56.487090  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) Calling .GetSSHHostname
	I0229 01:22:56.487149  325441 ssh_runner.go:195] Run: cat /version.json
	I0229 01:22:56.487176  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) Calling .GetSSHHostname
	I0229 01:22:56.489465  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | domain ingress-addon-legacy-671566 has defined MAC address 52:54:00:3b:c8:ec in network mk-ingress-addon-legacy-671566
	I0229 01:22:56.489672  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | domain ingress-addon-legacy-671566 has defined MAC address 52:54:00:3b:c8:ec in network mk-ingress-addon-legacy-671566
	I0229 01:22:56.489812  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:c8:ec", ip: ""} in network mk-ingress-addon-legacy-671566: {Iface:virbr1 ExpiryTime:2024-02-29 02:22:49 +0000 UTC Type:0 Mac:52:54:00:3b:c8:ec Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:ingress-addon-legacy-671566 Clientid:01:52:54:00:3b:c8:ec}
	I0229 01:22:56.489841  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | domain ingress-addon-legacy-671566 has defined IP address 192.168.39.248 and MAC address 52:54:00:3b:c8:ec in network mk-ingress-addon-legacy-671566
	I0229 01:22:56.489964  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) Calling .GetSSHPort
	I0229 01:22:56.490053  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:c8:ec", ip: ""} in network mk-ingress-addon-legacy-671566: {Iface:virbr1 ExpiryTime:2024-02-29 02:22:49 +0000 UTC Type:0 Mac:52:54:00:3b:c8:ec Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:ingress-addon-legacy-671566 Clientid:01:52:54:00:3b:c8:ec}
	I0229 01:22:56.490099  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | domain ingress-addon-legacy-671566 has defined IP address 192.168.39.248 and MAC address 52:54:00:3b:c8:ec in network mk-ingress-addon-legacy-671566
	I0229 01:22:56.490142  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) Calling .GetSSHKeyPath
	I0229 01:22:56.490222  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) Calling .GetSSHPort
	I0229 01:22:56.490294  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) Calling .GetSSHUsername
	I0229 01:22:56.490376  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) Calling .GetSSHKeyPath
	I0229 01:22:56.490440  325441 sshutil.go:53] new ssh client: &{IP:192.168.39.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-309085/.minikube/machines/ingress-addon-legacy-671566/id_rsa Username:docker}
	I0229 01:22:56.490483  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) Calling .GetSSHUsername
	I0229 01:22:56.490581  325441 sshutil.go:53] new ssh client: &{IP:192.168.39.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-309085/.minikube/machines/ingress-addon-legacy-671566/id_rsa Username:docker}
	I0229 01:22:56.587021  325441 ssh_runner.go:195] Run: systemctl --version
	I0229 01:22:56.593100  325441 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0229 01:22:56.599237  325441 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0229 01:22:56.599307  325441 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0229 01:22:56.622660  325441 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0229 01:22:56.622688  325441 start.go:475] detecting cgroup driver to use...
	I0229 01:22:56.622771  325441 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0229 01:22:56.651217  325441 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0229 01:22:56.664580  325441 docker.go:217] disabling cri-docker service (if available) ...
	I0229 01:22:56.664640  325441 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0229 01:22:56.678244  325441 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0229 01:22:56.691547  325441 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0229 01:22:56.804511  325441 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0229 01:22:56.963930  325441 docker.go:233] disabling docker service ...
	I0229 01:22:56.964005  325441 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0229 01:22:56.980165  325441 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0229 01:22:56.992964  325441 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0229 01:22:57.117247  325441 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0229 01:22:57.245005  325441 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0229 01:22:57.260503  325441 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0229 01:22:57.279987  325441 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
	I0229 01:22:57.291963  325441 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0229 01:22:57.302572  325441 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0229 01:22:57.302635  325441 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0229 01:22:57.312953  325441 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0229 01:22:57.323230  325441 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0229 01:22:57.333501  325441 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0229 01:22:57.343860  325441 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0229 01:22:57.354662  325441 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
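Taken together, the sed edits above aim to leave /etc/containerd/config.toml with roughly these CRI settings (reconstructed from the commands themselves, not captured from the VM):

    [plugins."io.containerd.grpc.v1.cri"]
      sandbox_image = "registry.k8s.io/pause:3.2"
      restrict_oom_score_adj = false
      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
        runtime_type = "io.containerd.runc.v2"
        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
          SystemdCgroup = false
      [plugins."io.containerd.grpc.v1.cri".cni]
        conf_dir = "/etc/cni/net.d"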
	I0229 01:22:57.365194  325441 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0229 01:22:57.374535  325441 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0229 01:22:57.374597  325441 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0229 01:22:57.387947  325441 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
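The failed sysctl probe above only means br_netfilter was not loaded yet; the prerequisite setup the run then performs is the standard Kubernetes one, equivalent to (the explicit bridge-nf sysctl is implied by loading the module and shown here for completeness):

    sudo modprobe br_netfilter
    sudo sysctl -w net.bridge.bridge-nf-call-iptables=1
    sudo sysctl -w net.ipv4.ip_forward=1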
	I0229 01:22:57.397423  325441 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 01:22:57.510128  325441 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0229 01:22:57.538270  325441 start.go:522] Will wait 60s for socket path /run/containerd/containerd.sock
	I0229 01:22:57.538363  325441 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0229 01:22:57.543171  325441 retry.go:31] will retry after 703.505308ms: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
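The retry above is normal: containerd was just restarted and its socket is still coming up. A plain-shell equivalent of the wait (the 60s timeout mirrors the stated limit):

    for _ in $(seq 1 60); do
      test -S /run/containerd/containerd.sock && break
      sleep 1
    done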
	I0229 01:22:58.247068  325441 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0229 01:22:58.252856  325441 start.go:543] Will wait 60s for crictl version
	I0229 01:22:58.252908  325441 ssh_runner.go:195] Run: which crictl
	I0229 01:22:58.257073  325441 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0229 01:22:58.293694  325441 start.go:559] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.7.11
	RuntimeApiVersion:  v1
	I0229 01:22:58.293769  325441 ssh_runner.go:195] Run: containerd --version
	I0229 01:22:58.323549  325441 ssh_runner.go:195] Run: containerd --version
	I0229 01:22:58.353761  325441 out.go:177] * Preparing Kubernetes v1.18.20 on containerd 1.7.11 ...
	I0229 01:22:58.355098  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) Calling .GetIP
	I0229 01:22:58.357661  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | domain ingress-addon-legacy-671566 has defined MAC address 52:54:00:3b:c8:ec in network mk-ingress-addon-legacy-671566
	I0229 01:22:58.358013  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:c8:ec", ip: ""} in network mk-ingress-addon-legacy-671566: {Iface:virbr1 ExpiryTime:2024-02-29 02:22:49 +0000 UTC Type:0 Mac:52:54:00:3b:c8:ec Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:ingress-addon-legacy-671566 Clientid:01:52:54:00:3b:c8:ec}
	I0229 01:22:58.358032  325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | domain ingress-addon-legacy-671566 has defined IP address 192.168.39.248 and MAC address 52:54:00:3b:c8:ec in network mk-ingress-addon-legacy-671566
	I0229 01:22:58.358259  325441 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0229 01:22:58.362744  325441 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0229 01:22:58.375869  325441 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime containerd
	I0229 01:22:58.375920  325441 ssh_runner.go:195] Run: sudo crictl images --output json
	I0229 01:22:58.407351  325441 containerd.go:608] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I0229 01:22:58.407414  325441 ssh_runner.go:195] Run: which lz4
	I0229 01:22:58.411355  325441 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18063-309085/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0229 01:22:58.411417  325441 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0229 01:22:58.415875  325441 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0229 01:22:58.415910  325441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-309085/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (494845061 bytes)
	I0229 01:23:00.182242  325441 containerd.go:548] Took 1.770827 seconds to copy over tarball
	I0229 01:23:00.182315  325441 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0229 01:23:03.140719  325441 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.958371689s)
	I0229 01:23:03.140751  325441 containerd.go:555] Took 2.958481 seconds to extract the tarball
	I0229 01:23:03.140761  325441 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0229 01:23:03.189078  325441 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 01:23:03.307525  325441 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0229 01:23:03.339197  325441 ssh_runner.go:195] Run: sudo crictl images --output json
	I0229 01:23:03.386958  325441 retry.go:31] will retry after 361.248235ms: sudo crictl images --output json: Process exited with status 1
	stdout:
	
	stderr:
	time="2024-02-29T01:23:03Z" level=fatal msg="validate service connection: validate CRI v1 image API for endpoint \"unix:///run/containerd/containerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
	I0229 01:23:03.748510  325441 ssh_runner.go:195] Run: sudo crictl images --output json
	I0229 01:23:03.792799  325441 containerd.go:608] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I0229 01:23:03.792825  325441 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0229 01:23:03.792928  325441 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I0229 01:23:03.792963  325441 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I0229 01:23:03.792971  325441 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0229 01:23:03.792961  325441 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0229 01:23:03.792891  325441 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I0229 01:23:03.792943  325441 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I0229 01:23:03.792919  325441 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 01:23:03.792998  325441 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I0229 01:23:03.794209  325441 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0229 01:23:03.794219  325441 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	I0229 01:23:03.794213  325441 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	I0229 01:23:03.794305  325441 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I0229 01:23:03.794337  325441 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I0229 01:23:03.794404  325441 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 01:23:03.794431  325441 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	I0229 01:23:03.794425  325441 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0229 01:23:03.952517  325441 containerd.go:252] Checking existence of image with name "registry.k8s.io/kube-scheduler:v1.18.20" and sha "a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346"
	I0229 01:23:03.952583  325441 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images check
	I0229 01:23:03.967428  325441 containerd.go:252] Checking existence of image with name "registry.k8s.io/pause:3.2" and sha "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c"
	I0229 01:23:03.967481  325441 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images check
	I0229 01:23:04.048733  325441 containerd.go:252] Checking existence of image with name "registry.k8s.io/etcd:3.4.3-0" and sha "303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f"
	I0229 01:23:04.048821  325441 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images check
	I0229 01:23:04.096864  325441 containerd.go:252] Checking existence of image with name "registry.k8s.io/kube-controller-manager:v1.18.20" and sha "e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290"
	I0229 01:23:04.096966  325441 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images check
	I0229 01:23:04.104045  325441 containerd.go:252] Checking existence of image with name "registry.k8s.io/kube-apiserver:v1.18.20" and sha "7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1"
	I0229 01:23:04.104108  325441 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images check
	I0229 01:23:04.108091  325441 containerd.go:252] Checking existence of image with name "registry.k8s.io/coredns:1.6.7" and sha "67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5"
	I0229 01:23:04.108156  325441 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images check
	I0229 01:23:04.132898  325441 containerd.go:252] Checking existence of image with name "registry.k8s.io/kube-proxy:v1.18.20" and sha "27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba"
	I0229 01:23:04.132980  325441 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images check
	I0229 01:23:04.308449  325441 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346" in container runtime
	I0229 01:23:04.308502  325441 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I0229 01:23:04.308552  325441 ssh_runner.go:195] Run: which crictl
	I0229 01:23:04.501995  325441 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0229 01:23:04.502056  325441 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0229 01:23:04.502136  325441 ssh_runner.go:195] Run: which crictl
	I0229 01:23:04.645601  325441 containerd.go:252] Checking existence of image with name "gcr.io/k8s-minikube/storage-provisioner:v5" and sha "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"
	I0229 01:23:04.645683  325441 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images check
	I0229 01:23:04.869435  325441 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f" in container runtime
	I0229 01:23:04.869493  325441 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.3-0
	I0229 01:23:04.869552  325441 ssh_runner.go:195] Run: which crictl
	I0229 01:23:05.132347  325441 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images check: (1.035345165s)
	I0229 01:23:05.132436  325441 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290" in container runtime
	I0229 01:23:05.132479  325441 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0229 01:23:05.132542  325441 ssh_runner.go:195] Run: which crictl
	I0229 01:23:05.164353  325441 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images check: (1.060212589s)
	I0229 01:23:05.164439  325441 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1" in container runtime
	I0229 01:23:05.164489  325441 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I0229 01:23:05.164544  325441 ssh_runner.go:195] Run: which crictl
	I0229 01:23:05.164937  325441 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images check: (1.056752568s)
	I0229 01:23:05.165010  325441 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5" in container runtime
	I0229 01:23:05.165048  325441 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.7
	I0229 01:23:05.165106  325441 ssh_runner.go:195] Run: which crictl
	I0229 01:23:05.269069  325441 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.18.20
	I0229 01:23:05.269134  325441 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0229 01:23:05.271191  325441 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images check: (1.13817999s)
	I0229 01:23:05.271276  325441 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba" in container runtime
	I0229 01:23:05.271315  325441 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I0229 01:23:05.271349  325441 ssh_runner.go:195] Run: which crictl
	I0229 01:23:05.339705  325441 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.3-0
	I0229 01:23:05.339757  325441 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.18.20
	I0229 01:23:05.339802  325441 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.18.20
	I0229 01:23:05.339858  325441 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.7
	I0229 01:23:05.447015  325441 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18063-309085/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.20
	I0229 01:23:05.447098  325441 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.18.20
	I0229 01:23:05.447600  325441 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18063-309085/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0229 01:23:05.473799  325441 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18063-309085/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.20
	I0229 01:23:05.473893  325441 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18063-309085/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7
	I0229 01:23:05.473924  325441 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18063-309085/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.20
	I0229 01:23:05.474009  325441 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18063-309085/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0
	I0229 01:23:05.500925  325441 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18063-309085/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.20
	I0229 01:23:05.500976  325441 cache_images.go:92] LoadImages completed in 1.70813875s
	W0229 01:23:05.501048  325441 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18063-309085/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.20: no such file or directory
	X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18063-309085/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.20: no such file or directory
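The warning means the per-image on-disk cache was empty in this run, so the images will have to be pulled later instead. Had the cache existed, loading would look roughly like this with ctr (the tarball path is hypothetical):

    sudo ctr -n k8s.io images ls -q | grep kube-scheduler
    sudo ctr -n k8s.io images import /tmp/kube-scheduler_v1.18.20.tar   # hypothetical cached tarball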
	I0229 01:23:05.501096  325441 ssh_runner.go:195] Run: sudo crictl info
	I0229 01:23:05.536285  325441 cni.go:84] Creating CNI manager for ""
	I0229 01:23:05.536308  325441 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0229 01:23:05.536333  325441 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0229 01:23:05.536357  325441 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.248 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-671566 NodeName:ingress-addon-legacy-671566 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.248"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.248 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0229 01:23:05.536538  325441 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.248
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "ingress-addon-legacy-671566"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.248
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.248"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0229 01:23:05.536633  325441 kubeadm.go:976] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=ingress-addon-legacy-671566 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.248
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-671566 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
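A sketch of how a drop-in like the one above is normally activated (minikube drives the equivalent over SSH; the exact commands are not shown in this section):

    sudo systemctl daemon-reload
    sudo systemctl enable --now kubelet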
	I0229 01:23:05.536685  325441 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I0229 01:23:05.547625  325441 binaries.go:44] Found k8s binaries, skipping transfer
	I0229 01:23:05.547684  325441 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0229 01:23:05.557771  325441 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (450 bytes)
	I0229 01:23:05.575886  325441 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I0229 01:23:05.594000  325441 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2137 bytes)
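The kubeadm.yaml.new staged above is what cluster bootstrapping consumes next, along the lines of (a sketch; minikube promotes the .new file and adds further flags not visible here):

    sudo /var/lib/minikube/binaries/v1.18.20/kubeadm init \
      --config /var/tmp/minikube/kubeadm.yaml \
      --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests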
	I0229 01:23:05.612719  325441 ssh_runner.go:195] Run: grep 192.168.39.248	control-plane.minikube.internal$ /etc/hosts
	I0229 01:23:05.617082  325441 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.248	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0229 01:23:05.630507  325441 certs.go:56] Setting up /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/ingress-addon-legacy-671566 for IP: 192.168.39.248
	I0229 01:23:05.630574  325441 certs.go:190] acquiring lock for shared ca certs: {Name:mkd93205d1e0ff28501dacf7d21e224f19de9501 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 01:23:05.630747  325441 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18063-309085/.minikube/ca.key
	I0229 01:23:05.630812  325441 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18063-309085/.minikube/proxy-client-ca.key
	I0229 01:23:05.630870  325441 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/ingress-addon-legacy-671566/client.key
	I0229 01:23:05.630887  325441 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/ingress-addon-legacy-671566/client.crt with IP's: []
	I0229 01:23:05.710958  325441 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/ingress-addon-legacy-671566/client.crt ...
	I0229 01:23:05.710992  325441 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/ingress-addon-legacy-671566/client.crt: {Name:mkfa226b1bdfa793718014ec2b328d9ffdcc4cf5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 01:23:05.711174  325441 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/ingress-addon-legacy-671566/client.key ...
	I0229 01:23:05.711204  325441 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/ingress-addon-legacy-671566/client.key: {Name:mkea2f79a37bc3b329676ae862b60640c1b92162 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 01:23:05.711305  325441 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/ingress-addon-legacy-671566/apiserver.key.25b71a70
	I0229 01:23:05.711330  325441 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/ingress-addon-legacy-671566/apiserver.crt.25b71a70 with IP's: [192.168.39.248 10.96.0.1 127.0.0.1 10.0.0.1]
	I0229 01:23:05.797945  325441 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/ingress-addon-legacy-671566/apiserver.crt.25b71a70 ...
	I0229 01:23:05.797977  325441 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/ingress-addon-legacy-671566/apiserver.crt.25b71a70: {Name:mkbf8ee2698e7d138cb6c86bf8794cd65ae8565a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 01:23:05.798161  325441 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/ingress-addon-legacy-671566/apiserver.key.25b71a70 ...
	I0229 01:23:05.798179  325441 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/ingress-addon-legacy-671566/apiserver.key.25b71a70: {Name:mk75ced3308e182f8a25a9ef5be4a614f1a09603 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 01:23:05.798269  325441 certs.go:337] copying /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/ingress-addon-legacy-671566/apiserver.crt.25b71a70 -> /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/ingress-addon-legacy-671566/apiserver.crt
	I0229 01:23:05.798380  325441 certs.go:341] copying /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/ingress-addon-legacy-671566/apiserver.key.25b71a70 -> /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/ingress-addon-legacy-671566/apiserver.key
	I0229 01:23:05.798456  325441 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/ingress-addon-legacy-671566/proxy-client.key
	I0229 01:23:05.798476  325441 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/ingress-addon-legacy-671566/proxy-client.crt with IP's: []
	I0229 01:23:06.086294  325441 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/ingress-addon-legacy-671566/proxy-client.crt ...
	I0229 01:23:06.086327  325441 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/ingress-addon-legacy-671566/proxy-client.crt: {Name:mk42c3f9895eb9d10a1b3cdbfc85c614b4e5f116 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 01:23:06.086503  325441 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/ingress-addon-legacy-671566/proxy-client.key ...
	I0229 01:23:06.086521  325441 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/ingress-addon-legacy-671566/proxy-client.key: {Name:mk07322cbdd7c3244d9d7b10ccbd63e80f2c1f19 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 01:23:06.086612  325441 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/ingress-addon-legacy-671566/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0229 01:23:06.086648  325441 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/ingress-addon-legacy-671566/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0229 01:23:06.086676  325441 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/ingress-addon-legacy-671566/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0229 01:23:06.086699  325441 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/ingress-addon-legacy-671566/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0229 01:23:06.086716  325441 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18063-309085/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0229 01:23:06.086730  325441 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18063-309085/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0229 01:23:06.086744  325441 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18063-309085/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0229 01:23:06.086761  325441 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18063-309085/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0229 01:23:06.086833  325441 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-309085/.minikube/certs/home/jenkins/minikube-integration/18063-309085/.minikube/certs/316336.pem (1338 bytes)
	W0229 01:23:06.086891  325441 certs.go:433] ignoring /home/jenkins/minikube-integration/18063-309085/.minikube/certs/home/jenkins/minikube-integration/18063-309085/.minikube/certs/316336_empty.pem, impossibly tiny 0 bytes
	I0229 01:23:06.086910  325441 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-309085/.minikube/certs/home/jenkins/minikube-integration/18063-309085/.minikube/certs/ca-key.pem (1679 bytes)
	I0229 01:23:06.086950  325441 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-309085/.minikube/certs/home/jenkins/minikube-integration/18063-309085/.minikube/certs/ca.pem (1082 bytes)
	I0229 01:23:06.086985  325441 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-309085/.minikube/certs/home/jenkins/minikube-integration/18063-309085/.minikube/certs/cert.pem (1123 bytes)
	I0229 01:23:06.087013  325441 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-309085/.minikube/certs/home/jenkins/minikube-integration/18063-309085/.minikube/certs/key.pem (1675 bytes)
	I0229 01:23:06.087072  325441 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-309085/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18063-309085/.minikube/files/etc/ssl/certs/3163362.pem (1708 bytes)
	I0229 01:23:06.087116  325441 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18063-309085/.minikube/files/etc/ssl/certs/3163362.pem -> /usr/share/ca-certificates/3163362.pem
	I0229 01:23:06.087137  325441 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18063-309085/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0229 01:23:06.087154  325441 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18063-309085/.minikube/certs/316336.pem -> /usr/share/ca-certificates/316336.pem
	I0229 01:23:06.087779  325441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/ingress-addon-legacy-671566/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0229 01:23:06.116460  325441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/ingress-addon-legacy-671566/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0229 01:23:06.142592  325441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/ingress-addon-legacy-671566/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0229 01:23:06.168422  325441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/ingress-addon-legacy-671566/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0229 01:23:06.193655  325441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-309085/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0229 01:23:06.219339  325441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-309085/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0229 01:23:06.244982  325441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-309085/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0229 01:23:06.270173  325441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-309085/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0229 01:23:06.295738  325441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-309085/.minikube/files/etc/ssl/certs/3163362.pem --> /usr/share/ca-certificates/3163362.pem (1708 bytes)
	I0229 01:23:06.321222  325441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-309085/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0229 01:23:06.346611  325441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-309085/.minikube/certs/316336.pem --> /usr/share/ca-certificates/316336.pem (1338 bytes)
	I0229 01:23:06.371502  325441 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
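With the certificate material copied over, the chain can be sanity-checked on the guest (illustrative; not a step the run performs):

    sudo openssl verify -CAfile /var/lib/minikube/certs/ca.crt \
      /var/lib/minikube/certs/apiserver.crt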
	I0229 01:23:06.388956  325441 ssh_runner.go:195] Run: openssl version
	I0229 01:23:06.394961  325441 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3163362.pem && ln -fs /usr/share/ca-certificates/3163362.pem /etc/ssl/certs/3163362.pem"
	I0229 01:23:06.405800  325441 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3163362.pem
	I0229 01:23:06.410970  325441 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 29 01:18 /usr/share/ca-certificates/3163362.pem
	I0229 01:23:06.411018  325441 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3163362.pem
	I0229 01:23:06.417130  325441 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3163362.pem /etc/ssl/certs/3ec20f2e.0"
	I0229 01:23:06.428978  325441 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0229 01:23:06.441159  325441 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0229 01:23:06.445867  325441 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 29 01:12 /usr/share/ca-certificates/minikubeCA.pem
	I0229 01:23:06.445932  325441 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0229 01:23:06.451762  325441 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0229 01:23:06.462833  325441 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/316336.pem && ln -fs /usr/share/ca-certificates/316336.pem /etc/ssl/certs/316336.pem"
	I0229 01:23:06.473740  325441 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/316336.pem
	I0229 01:23:06.478469  325441 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 29 01:18 /usr/share/ca-certificates/316336.pem
	I0229 01:23:06.478525  325441 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/316336.pem
	I0229 01:23:06.484277  325441 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/316336.pem /etc/ssl/certs/51391683.0"
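The three openssl x509 -hash -noout runs above compute each CA's subject hash, and the test -L || ln -fs commands then publish the PEM under /etc/ssl/certs/<hash>.0, which is the name OpenSSL resolves at verification time. The same idea as a standalone sketch (hash values vary per certificate; in this run minikubeCA hashed to b5213941, matching the /etc/ssl/certs/b5213941.0 link above):

  # Compute the subject hash OpenSSL will look for, then link the CA into place.
  HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
  sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"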
	I0229 01:23:06.495008  325441 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0229 01:23:06.499296  325441 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0229 01:23:06.499341  325441 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-671566 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-671566 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.248 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 01:23:06.499416  325441 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0229 01:23:06.499450  325441 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0229 01:23:06.536391  325441 cri.go:89] found id: ""
	I0229 01:23:06.536446  325441 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0229 01:23:06.546277  325441 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0229 01:23:06.555993  325441 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 01:23:06.565608  325441 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
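The failed ls above is minikube's stale-config probe: if any of the four kubeconfig files exist, leftover control-plane state is cleaned up before kubeadm init; exit status 2 (no such files) means a fresh node, so the cleanup is skipped. Roughly equivalent shell, assuming only these four files matter:

  # Exit 0 => existing control-plane configs found; non-zero => first start.
  sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf \
              /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf \
    && echo "stale configs present, cleaning up" \
    || echo "no stale configs, skipping cleanup"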
	I0229 01:23:06.565649  325441 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0229 01:23:06.620686  325441 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I0229 01:23:06.621022  325441 kubeadm.go:322] [preflight] Running pre-flight checks
	I0229 01:23:06.765007  325441 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0229 01:23:06.765164  325441 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0229 01:23:06.765363  325441 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0229 01:23:06.966398  325441 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0229 01:23:06.966966  325441 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0229 01:23:06.967036  325441 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0229 01:23:07.091209  325441 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0229 01:23:07.092973  325441 out.go:204]   - Generating certificates and keys ...
	I0229 01:23:07.093072  325441 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0229 01:23:07.093149  325441 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0229 01:23:07.617656  325441 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0229 01:23:07.772191  325441 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0229 01:23:07.925456  325441 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0229 01:23:08.304209  325441 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0229 01:23:08.516856  325441 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0229 01:23:08.517042  325441 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-671566 localhost] and IPs [192.168.39.248 127.0.0.1 ::1]
	I0229 01:23:08.731858  325441 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0229 01:23:08.731984  325441 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-671566 localhost] and IPs [192.168.39.248 127.0.0.1 ::1]
	I0229 01:23:08.995142  325441 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0229 01:23:09.466158  325441 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0229 01:23:09.804992  325441 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0229 01:23:09.805062  325441 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0229 01:23:10.060213  325441 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0229 01:23:10.250231  325441 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0229 01:23:10.488582  325441 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0229 01:23:10.697438  325441 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0229 01:23:10.698934  325441 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0229 01:23:10.701519  325441 out.go:204]   - Booting up control plane ...
	I0229 01:23:10.701633  325441 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0229 01:23:10.715870  325441 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0229 01:23:10.716024  325441 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0229 01:23:10.716154  325441 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0229 01:23:10.720126  325441 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0229 01:23:50.713178  325441 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0229 01:23:50.714172  325441 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 01:23:50.714416  325441 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 01:23:55.715121  325441 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 01:23:55.715394  325441 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 01:24:05.715374  325441 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 01:24:05.715640  325441 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 01:24:25.715438  325441 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 01:24:25.715623  325441 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 01:25:05.716749  325441 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 01:25:05.716987  325441 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 01:25:05.716997  325441 kubeadm.go:322] 
	I0229 01:25:05.717040  325441 kubeadm.go:322] 	Unfortunately, an error has occurred:
	I0229 01:25:05.717100  325441 kubeadm.go:322] 		timed out waiting for the condition
	I0229 01:25:05.717114  325441 kubeadm.go:322] 
	I0229 01:25:05.717161  325441 kubeadm.go:322] 	This error is likely caused by:
	I0229 01:25:05.717235  325441 kubeadm.go:322] 		- The kubelet is not running
	I0229 01:25:05.717377  325441 kubeadm.go:322] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0229 01:25:05.717387  325441 kubeadm.go:322] 
	I0229 01:25:05.717511  325441 kubeadm.go:322] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0229 01:25:05.717569  325441 kubeadm.go:322] 		- 'systemctl status kubelet'
	I0229 01:25:05.717616  325441 kubeadm.go:322] 		- 'journalctl -xeu kubelet'
	I0229 01:25:05.717635  325441 kubeadm.go:322] 
	I0229 01:25:05.717924  325441 kubeadm.go:322] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0229 01:25:05.718091  325441 kubeadm.go:322] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0229 01:25:05.718114  325441 kubeadm.go:322] 
	I0229 01:25:05.718246  325441 kubeadm.go:322] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0229 01:25:05.718386  325441 kubeadm.go:322] 		- 'crictl --runtime-endpoint /run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
	I0229 01:25:05.718497  325441 kubeadm.go:322] 		Once you have found the failing container, you can inspect its logs with:
	I0229 01:25:05.718618  325441 kubeadm.go:322] 		- 'crictl --runtime-endpoint /run/containerd/containerd.sock logs CONTAINERID'
	I0229 01:25:05.718655  325441 kubeadm.go:322] 
	I0229 01:25:05.718892  325441 kubeadm.go:322] W0229 01:23:06.601612     835 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I0229 01:25:05.719046  325441 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0229 01:25:05.719175  325441 kubeadm.go:322] W0229 01:23:10.697008     835 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0229 01:25:05.719289  325441 kubeadm.go:322] W0229 01:23:10.697935     835 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0229 01:25:05.719380  325441 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0229 01:25:05.719480  325441 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
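The retry below repeats the same init, so the failure is easiest to pin down by hand with the probes kubeadm itself names in the hint text above: the kubelet healthz endpoint on port 10248 and the systemd unit. A minimal on-node check (commands taken from the kubeadm output; nothing new assumed):

  curl -sSL http://localhost:10248/healthz   # 'connection refused' here matches the kubelet-check lines
  systemctl status kubelet                   # unit state
  journalctl -xeu kubelet | tail -n 50       # most recent kubelet errors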
	W0229 01:25:05.719682  325441 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-671566 localhost] and IPs [192.168.39.248 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-671566 localhost] and IPs [192.168.39.248 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /run/containerd/containerd.sock logs CONTAINERID'
	
	
	stderr:
	W0229 01:23:06.601612     835 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0229 01:23:10.697008     835 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0229 01:23:10.697935     835 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0229 01:25:05.719737  325441 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0229 01:25:06.173667  325441 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 01:25:06.190163  325441 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 01:25:06.200567  325441 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 01:25:06.200613  325441 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0229 01:25:06.257418  325441 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I0229 01:25:06.257656  325441 kubeadm.go:322] [preflight] Running pre-flight checks
	I0229 01:25:06.398965  325441 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0229 01:25:06.399103  325441 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0229 01:25:06.399211  325441 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0229 01:25:06.609973  325441 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0229 01:25:06.610965  325441 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0229 01:25:06.611019  325441 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0229 01:25:06.746701  325441 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0229 01:25:06.748797  325441 out.go:204]   - Generating certificates and keys ...
	I0229 01:25:06.748893  325441 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0229 01:25:06.749012  325441 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0229 01:25:06.749147  325441 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0229 01:25:06.749261  325441 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0229 01:25:06.749357  325441 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0229 01:25:06.749441  325441 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0229 01:25:06.749542  325441 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0229 01:25:06.749629  325441 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0229 01:25:06.749755  325441 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0229 01:25:06.749871  325441 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0229 01:25:06.749926  325441 kubeadm.go:322] [certs] Using the existing "sa" key
	I0229 01:25:06.750025  325441 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0229 01:25:06.930317  325441 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0229 01:25:07.025823  325441 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0229 01:25:07.129158  325441 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0229 01:25:07.264686  325441 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0229 01:25:07.265268  325441 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0229 01:25:07.266936  325441 out.go:204]   - Booting up control plane ...
	I0229 01:25:07.267080  325441 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0229 01:25:07.278268  325441 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0229 01:25:07.281411  325441 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0229 01:25:07.282723  325441 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0229 01:25:07.285793  325441 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0229 01:25:47.289469  325441 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0229 01:25:47.290069  325441 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 01:25:47.290334  325441 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 01:25:52.291353  325441 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 01:25:52.291559  325441 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 01:26:02.292655  325441 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 01:26:02.292894  325441 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 01:26:22.291778  325441 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 01:26:22.292015  325441 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 01:27:02.291095  325441 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 01:27:02.291349  325441 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 01:27:02.291358  325441 kubeadm.go:322] 
	I0229 01:27:02.291432  325441 kubeadm.go:322] 	Unfortunately, an error has occurred:
	I0229 01:27:02.291531  325441 kubeadm.go:322] 		timed out waiting for the condition
	I0229 01:27:02.291548  325441 kubeadm.go:322] 
	I0229 01:27:02.291578  325441 kubeadm.go:322] 	This error is likely caused by:
	I0229 01:27:02.291631  325441 kubeadm.go:322] 		- The kubelet is not running
	I0229 01:27:02.291771  325441 kubeadm.go:322] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0229 01:27:02.291780  325441 kubeadm.go:322] 
	I0229 01:27:02.291906  325441 kubeadm.go:322] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0229 01:27:02.291953  325441 kubeadm.go:322] 		- 'systemctl status kubelet'
	I0229 01:27:02.291984  325441 kubeadm.go:322] 		- 'journalctl -xeu kubelet'
	I0229 01:27:02.291992  325441 kubeadm.go:322] 
	I0229 01:27:02.292127  325441 kubeadm.go:322] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0229 01:27:02.292234  325441 kubeadm.go:322] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0229 01:27:02.292251  325441 kubeadm.go:322] 
	I0229 01:27:02.292372  325441 kubeadm.go:322] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0229 01:27:02.292508  325441 kubeadm.go:322] 		- 'crictl --runtime-endpoint /run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
	I0229 01:27:02.292610  325441 kubeadm.go:322] 		Once you have found the failing container, you can inspect its logs with:
	I0229 01:27:02.292721  325441 kubeadm.go:322] 		- 'crictl --runtime-endpoint /run/containerd/containerd.sock logs CONTAINERID'
	I0229 01:27:02.292732  325441 kubeadm.go:322] 
	I0229 01:27:02.293199  325441 kubeadm.go:322] W0229 01:25:06.251317    3633 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I0229 01:27:02.293339  325441 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0229 01:27:02.293456  325441 kubeadm.go:322] W0229 01:25:07.272412    3633 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0229 01:27:02.293623  325441 kubeadm.go:322] W0229 01:25:07.275643    3633 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0229 01:27:02.293732  325441 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0229 01:27:02.293828  325441 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0229 01:27:02.293927  325441 kubeadm.go:406] StartCluster complete in 3m55.794585648s
	I0229 01:27:02.294035  325441 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 01:27:02.294124  325441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 01:27:02.342085  325441 cri.go:89] found id: ""
	I0229 01:27:02.342112  325441 logs.go:276] 0 containers: []
	W0229 01:27:02.342123  325441 logs.go:278] No container was found matching "kube-apiserver"
	I0229 01:27:02.342133  325441 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 01:27:02.342200  325441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 01:27:02.377552  325441 cri.go:89] found id: ""
	I0229 01:27:02.377581  325441 logs.go:276] 0 containers: []
	W0229 01:27:02.377592  325441 logs.go:278] No container was found matching "etcd"
	I0229 01:27:02.377600  325441 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 01:27:02.377671  325441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 01:27:02.411797  325441 cri.go:89] found id: ""
	I0229 01:27:02.411818  325441 logs.go:276] 0 containers: []
	W0229 01:27:02.411825  325441 logs.go:278] No container was found matching "coredns"
	I0229 01:27:02.411831  325441 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 01:27:02.411877  325441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 01:27:02.442887  325441 cri.go:89] found id: ""
	I0229 01:27:02.442912  325441 logs.go:276] 0 containers: []
	W0229 01:27:02.442922  325441 logs.go:278] No container was found matching "kube-scheduler"
	I0229 01:27:02.442928  325441 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 01:27:02.442998  325441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 01:27:02.503577  325441 cri.go:89] found id: ""
	I0229 01:27:02.503604  325441 logs.go:276] 0 containers: []
	W0229 01:27:02.503613  325441 logs.go:278] No container was found matching "kube-proxy"
	I0229 01:27:02.503619  325441 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 01:27:02.503689  325441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 01:27:02.557845  325441 cri.go:89] found id: ""
	I0229 01:27:02.557881  325441 logs.go:276] 0 containers: []
	W0229 01:27:02.557891  325441 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 01:27:02.557899  325441 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 01:27:02.557956  325441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 01:27:02.596567  325441 cri.go:89] found id: ""
	I0229 01:27:02.596596  325441 logs.go:276] 0 containers: []
	W0229 01:27:02.596606  325441 logs.go:278] No container was found matching "kindnet"
	I0229 01:27:02.596620  325441 logs.go:123] Gathering logs for containerd ...
	I0229 01:27:02.596672  325441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 01:27:02.629878  325441 logs.go:123] Gathering logs for container status ...
	I0229 01:27:02.629916  325441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 01:27:02.672685  325441 logs.go:123] Gathering logs for kubelet ...
	I0229 01:27:02.672719  325441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0229 01:27:02.697224  325441 logs.go:138] Found kubelet problem: Feb 29 01:26:54 ingress-addon-legacy-671566 kubelet[6134]: F0229 01:26:54.305038    6134 kubelet.go:1399] Failed to start ContainerManager failed to get rootfs info: unable to find data in memory cache
	W0229 01:27:02.702331  325441 logs.go:138] Found kubelet problem: Feb 29 01:26:55 ingress-addon-legacy-671566 kubelet[6161]: F0229 01:26:55.554030    6161 kubelet.go:1399] Failed to start ContainerManager failed to get rootfs info: unable to find data in memory cache
	W0229 01:27:02.707679  325441 logs.go:138] Found kubelet problem: Feb 29 01:26:56 ingress-addon-legacy-671566 kubelet[6187]: F0229 01:26:56.762409    6187 kubelet.go:1399] Failed to start ContainerManager failed to get rootfs info: unable to find data in memory cache
	W0229 01:27:02.713016  325441 logs.go:138] Found kubelet problem: Feb 29 01:26:58 ingress-addon-legacy-671566 kubelet[6213]: F0229 01:26:58.011021    6213 kubelet.go:1399] Failed to start ContainerManager failed to get rootfs info: unable to find data in memory cache
	W0229 01:27:02.717824  325441 logs.go:138] Found kubelet problem: Feb 29 01:26:59 ingress-addon-legacy-671566 kubelet[6246]: F0229 01:26:59.297682    6246 kubelet.go:1399] Failed to start ContainerManager failed to get rootfs info: unable to find data in memory cache
	W0229 01:27:02.722645  325441 logs.go:138] Found kubelet problem: Feb 29 01:27:00 ingress-addon-legacy-671566 kubelet[6275]: F0229 01:27:00.535137    6275 kubelet.go:1399] Failed to start ContainerManager failed to get rootfs info: unable to find data in memory cache
	W0229 01:27:02.727451  325441 logs.go:138] Found kubelet problem: Feb 29 01:27:01 ingress-addon-legacy-671566 kubelet[6303]: F0229 01:27:01.826126    6303 kubelet.go:1399] Failed to start ContainerManager failed to get rootfs info: unable to find data in memory cache
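The seven "Found kubelet problem" entries above are the effective root cause for this failure: the kubelet is restarting roughly once a second (note the climbing PIDs 6134 through 6303) and dying during ContainerManager startup with "failed to get rootfs info: unable to find data in memory cache" (a cadvisor-backed rootfs lookup), so it never answers healthz and kubeadm's wait-control-plane times out. A hedged way to confirm the crash loop from the node, reusing the journalctl sweep minikube ran above:

  sudo journalctl -u kubelet -n 400 | grep -c 'Failed to start ContainerManager'    # restart count
  sudo journalctl -u kubelet -n 400 | grep 'failed to get rootfs info' | tail -n 1  # last fatal line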
	I0229 01:27:02.730085  325441 logs.go:123] Gathering logs for dmesg ...
	I0229 01:27:02.730102  325441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 01:27:02.745045  325441 logs.go:123] Gathering logs for describe nodes ...
	I0229 01:27:02.745068  325441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 01:27:02.809898  325441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
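The "connection refused" on localhost:8443 is consistent with the empty crictl listings earlier: no apiserver container ever started, so kubectl fails at the TCP level rather than on credentials. A quick hedged check that separates "apiserver never came up" from "bad kubeconfig" (ss assumed available in the guest):

  sudo ss -ltn 'sport = :8443'             # no listener => apiserver never started
  curl -k https://localhost:8443/healthz   # expect 'connection refused' in this state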
	W0229 01:27:02.809966  325441 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /run/containerd/containerd.sock logs CONTAINERID'
	
	
	stderr:
	W0229 01:25:06.251317    3633 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0229 01:25:07.272412    3633 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0229 01:25:07.275643    3633 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0229 01:27:02.810013  325441 out.go:239] * 
	W0229 01:27:02.810130  325441 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /run/containerd/containerd.sock logs CONTAINERID'
	
	
	stderr:
	W0229 01:25:06.251317    3633 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0229 01:25:07.272412    3633 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0229 01:25:07.275643    3633 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0229 01:27:02.810164  325441 out.go:239] * 
	W0229 01:27:02.811097  325441 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0229 01:27:02.813112  325441 out.go:177] X Problems detected in kubelet:
	I0229 01:27:02.814535  325441 out.go:177]   Feb 29 01:26:54 ingress-addon-legacy-671566 kubelet[6134]: F0229 01:26:54.305038    6134 kubelet.go:1399] Failed to start ContainerManager failed to get rootfs info: unable to find data in memory cache
	I0229 01:27:02.815713  325441 out.go:177]   Feb 29 01:26:55 ingress-addon-legacy-671566 kubelet[6161]: F0229 01:26:55.554030    6161 kubelet.go:1399] Failed to start ContainerManager failed to get rootfs info: unable to find data in memory cache
	I0229 01:27:02.816773  325441 out.go:177]   Feb 29 01:26:56 ingress-addon-legacy-671566 kubelet[6187]: F0229 01:26:56.762409    6187 kubelet.go:1399] Failed to start ContainerManager failed to get rootfs info: unable to find data in memory cache
	I0229 01:27:02.819477  325441 out.go:177] 
	W0229 01:27:02.820619  325441 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equivalent to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equivalent to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equivalent to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equivalent to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equivalent to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtime's CLI.
	
		Here is one example of how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /run/containerd/containerd.sock logs CONTAINERID'
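
One way to pull just the fatal kubelet lines out of that journal, as a sketch; the grep pattern simply matches the kubelet failure shown in the "Problems detected in kubelet" lines above:

	journalctl -u kubelet --no-pager | grep 'Failed to start ContainerManager' | tail -n 5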
	
	
	stderr:
	W0229 01:25:06.251317    3633 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0229 01:25:07.272412    3633 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0229 01:25:07.275643    3633 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0229 01:27:02.820675  325441 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0229 01:27:02.820694  325441 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
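
Applying that suggestion to this test's failing invocation would look roughly like the following; the flags are copied from the start command recorded below, with only the suggested extra-config flag added:

	out/minikube-linux-amd64 start -p ingress-addon-legacy-671566 --kubernetes-version=v1.18.20 \
	    --memory=4096 --wait=true --driver=kvm2 --container-runtime=containerd \
	    --extra-config=kubelet.cgroup-driver=systemd
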
	I0229 01:27:02.822003  325441 out.go:177] 

                                                
                                                
** /stderr **
ingress_addon_legacy_test.go:41: failed to start minikube with args: "out/minikube-linux-amd64 start -p ingress-addon-legacy-671566 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd" : exit status 109
--- FAIL: TestIngressAddonLegacy/StartLegacyK8sCluster (291.04s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (119.23s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-671566 addons enable ingress --alsologtostderr -v=5
E0229 01:27:36.545320  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/functional-601906/client.crt: no such file or directory
E0229 01:28:58.466523  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/functional-601906/client.crt: no such file or directory
ingress_addon_legacy_test.go:70: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ingress-addon-legacy-671566 addons enable ingress --alsologtostderr -v=5: exit status 10 (1m58.982630602s)

                                                
                                                
-- stdout --
	* ingress is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image registry.k8s.io/ingress-nginx/controller:v0.49.3
	  - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	  - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	* Verifying ingress addon...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0229 01:27:02.947412  326328 out.go:291] Setting OutFile to fd 1 ...
	I0229 01:27:02.947551  326328 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 01:27:02.947559  326328 out.go:304] Setting ErrFile to fd 2...
	I0229 01:27:02.947563  326328 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 01:27:02.947737  326328 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18063-309085/.minikube/bin
	I0229 01:27:02.947994  326328 mustload.go:65] Loading cluster: ingress-addon-legacy-671566
	I0229 01:27:02.948343  326328 config.go:182] Loaded profile config "ingress-addon-legacy-671566": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.18.20
	I0229 01:27:02.948366  326328 addons.go:597] checking whether the cluster is paused
	I0229 01:27:02.948444  326328 config.go:182] Loaded profile config "ingress-addon-legacy-671566": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.18.20
	I0229 01:27:02.948473  326328 host.go:66] Checking if "ingress-addon-legacy-671566" exists ...
	I0229 01:27:02.948861  326328 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0229 01:27:02.948910  326328 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 01:27:02.963860  326328 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36259
	I0229 01:27:02.964448  326328 main.go:141] libmachine: () Calling .GetVersion
	I0229 01:27:02.965086  326328 main.go:141] libmachine: Using API Version  1
	I0229 01:27:02.965113  326328 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 01:27:02.965558  326328 main.go:141] libmachine: () Calling .GetMachineName
	I0229 01:27:02.965761  326328 main.go:141] libmachine: (ingress-addon-legacy-671566) Calling .GetState
	I0229 01:27:02.967516  326328 main.go:141] libmachine: (ingress-addon-legacy-671566) Calling .DriverName
	I0229 01:27:02.967725  326328 ssh_runner.go:195] Run: systemctl --version
	I0229 01:27:02.967744  326328 main.go:141] libmachine: (ingress-addon-legacy-671566) Calling .GetSSHHostname
	I0229 01:27:02.969736  326328 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | domain ingress-addon-legacy-671566 has defined MAC address 52:54:00:3b:c8:ec in network mk-ingress-addon-legacy-671566
	I0229 01:27:02.970228  326328 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:c8:ec", ip: ""} in network mk-ingress-addon-legacy-671566: {Iface:virbr1 ExpiryTime:2024-02-29 02:22:49 +0000 UTC Type:0 Mac:52:54:00:3b:c8:ec Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:ingress-addon-legacy-671566 Clientid:01:52:54:00:3b:c8:ec}
	I0229 01:27:02.970254  326328 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | domain ingress-addon-legacy-671566 has defined IP address 192.168.39.248 and MAC address 52:54:00:3b:c8:ec in network mk-ingress-addon-legacy-671566
	I0229 01:27:02.970394  326328 main.go:141] libmachine: (ingress-addon-legacy-671566) Calling .GetSSHPort
	I0229 01:27:02.970586  326328 main.go:141] libmachine: (ingress-addon-legacy-671566) Calling .GetSSHKeyPath
	I0229 01:27:02.970755  326328 main.go:141] libmachine: (ingress-addon-legacy-671566) Calling .GetSSHUsername
	I0229 01:27:02.970880  326328 sshutil.go:53] new ssh client: &{IP:192.168.39.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-309085/.minikube/machines/ingress-addon-legacy-671566/id_rsa Username:docker}
	I0229 01:27:03.048677  326328 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0229 01:27:03.048769  326328 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0229 01:27:03.088228  326328 cri.go:89] found id: ""
	I0229 01:27:03.088278  326328 main.go:141] libmachine: Making call to close driver server
	I0229 01:27:03.088289  326328 main.go:141] libmachine: (ingress-addon-legacy-671566) Calling .Close
	I0229 01:27:03.088595  326328 main.go:141] libmachine: Successfully made call to close driver server
	I0229 01:27:03.088624  326328 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 01:27:03.088627  326328 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | Closing plugin on server side
	I0229 01:27:03.090770  326328 out.go:177] * ingress is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	I0229 01:27:03.092028  326328 config.go:182] Loaded profile config "ingress-addon-legacy-671566": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.18.20
	I0229 01:27:03.092043  326328 addons.go:69] Setting ingress=true in profile "ingress-addon-legacy-671566"
	I0229 01:27:03.092050  326328 addons.go:234] Setting addon ingress=true in "ingress-addon-legacy-671566"
	I0229 01:27:03.092081  326328 host.go:66] Checking if "ingress-addon-legacy-671566" exists ...
	I0229 01:27:03.092338  326328 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0229 01:27:03.092371  326328 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 01:27:03.107280  326328 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40721
	I0229 01:27:03.107649  326328 main.go:141] libmachine: () Calling .GetVersion
	I0229 01:27:03.108125  326328 main.go:141] libmachine: Using API Version  1
	I0229 01:27:03.108147  326328 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 01:27:03.108516  326328 main.go:141] libmachine: () Calling .GetMachineName
	I0229 01:27:03.109030  326328 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0229 01:27:03.109112  326328 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 01:27:03.122797  326328 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42319
	I0229 01:27:03.123118  326328 main.go:141] libmachine: () Calling .GetVersion
	I0229 01:27:03.123584  326328 main.go:141] libmachine: Using API Version  1
	I0229 01:27:03.123603  326328 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 01:27:03.123875  326328 main.go:141] libmachine: () Calling .GetMachineName
	I0229 01:27:03.124071  326328 main.go:141] libmachine: (ingress-addon-legacy-671566) Calling .GetState
	I0229 01:27:03.125455  326328 main.go:141] libmachine: (ingress-addon-legacy-671566) Calling .DriverName
	I0229 01:27:03.127322  326328 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v0.49.3
	I0229 01:27:03.128401  326328 out.go:177]   - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	I0229 01:27:03.129464  326328 out.go:177]   - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	I0229 01:27:03.130730  326328 addons.go:426] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0229 01:27:03.130746  326328 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (15618 bytes)
	I0229 01:27:03.130761  326328 main.go:141] libmachine: (ingress-addon-legacy-671566) Calling .GetSSHHostname
	I0229 01:27:03.133366  326328 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | domain ingress-addon-legacy-671566 has defined MAC address 52:54:00:3b:c8:ec in network mk-ingress-addon-legacy-671566
	I0229 01:27:03.133725  326328 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:c8:ec", ip: ""} in network mk-ingress-addon-legacy-671566: {Iface:virbr1 ExpiryTime:2024-02-29 02:22:49 +0000 UTC Type:0 Mac:52:54:00:3b:c8:ec Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:ingress-addon-legacy-671566 Clientid:01:52:54:00:3b:c8:ec}
	I0229 01:27:03.133749  326328 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | domain ingress-addon-legacy-671566 has defined IP address 192.168.39.248 and MAC address 52:54:00:3b:c8:ec in network mk-ingress-addon-legacy-671566
	I0229 01:27:03.133839  326328 main.go:141] libmachine: (ingress-addon-legacy-671566) Calling .GetSSHPort
	I0229 01:27:03.133998  326328 main.go:141] libmachine: (ingress-addon-legacy-671566) Calling .GetSSHKeyPath
	I0229 01:27:03.134160  326328 main.go:141] libmachine: (ingress-addon-legacy-671566) Calling .GetSSHUsername
	I0229 01:27:03.134286  326328 sshutil.go:53] new ssh client: &{IP:192.168.39.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-309085/.minikube/machines/ingress-addon-legacy-671566/id_rsa Username:docker}
	I0229 01:27:03.224891  326328 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0229 01:27:03.293313  326328 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:27:03.293348  326328 retry.go:31] will retry after 229.567727ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:27:03.523839  326328 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0229 01:27:03.586100  326328 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:27:03.586144  326328 retry.go:31] will retry after 347.378331ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:27:03.933730  326328 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0229 01:27:04.003754  326328 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:27:04.003817  326328 retry.go:31] will retry after 427.94681ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:27:04.432519  326328 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0229 01:27:04.497530  326328 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:27:04.497577  326328 retry.go:31] will retry after 445.924793ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:27:04.944362  326328 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0229 01:27:05.029306  326328 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:27:05.029347  326328 retry.go:31] will retry after 1.316196688s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:27:06.346241  326328 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0229 01:27:06.410843  326328 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:27:06.410882  326328 retry.go:31] will retry after 2.180891008s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:27:08.592604  326328 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0229 01:27:08.679068  326328 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:27:08.679119  326328 retry.go:31] will retry after 1.47270705s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:27:10.152794  326328 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0229 01:27:10.229690  326328 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:27:10.229727  326328 retry.go:31] will retry after 3.43112549s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:27:13.661717  326328 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0229 01:27:13.758345  326328 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:27:13.758379  326328 retry.go:31] will retry after 6.389296122s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:27:20.149608  326328 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0229 01:27:20.214899  326328 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:27:20.214935  326328 retry.go:31] will retry after 5.149070872s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:27:25.366407  326328 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0229 01:27:25.429177  326328 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:27:25.429215  326328 retry.go:31] will retry after 19.685061391s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:27:45.116277  326328 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0229 01:27:45.182924  326328 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:27:45.182960  326328 retry.go:31] will retry after 30.325770625s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:28:15.510933  326328 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0229 01:28:15.597032  326328 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:28:15.597073  326328 retry.go:31] will retry after 17.462368119s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:28:33.060368  326328 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0229 01:28:33.129173  326328 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:28:33.129218  326328 retry.go:31] will retry after 28.61168895s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:29:01.744025  326328 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0229 01:29:01.846101  326328 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:29:01.846207  326328 main.go:141] libmachine: Making call to close driver server
	I0229 01:29:01.846226  326328 main.go:141] libmachine: (ingress-addon-legacy-671566) Calling .Close
	I0229 01:29:01.846563  326328 main.go:141] libmachine: Successfully made call to close driver server
	I0229 01:29:01.846584  326328 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 01:29:01.846587  326328 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | Closing plugin on server side
	I0229 01:29:01.846593  326328 main.go:141] libmachine: Making call to close driver server
	I0229 01:29:01.846604  326328 main.go:141] libmachine: (ingress-addon-legacy-671566) Calling .Close
	I0229 01:29:01.846851  326328 main.go:141] libmachine: Successfully made call to close driver server
	I0229 01:29:01.846869  326328 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 01:29:01.846870  326328 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | Closing plugin on server side
	I0229 01:29:01.846887  326328 addons.go:470] Verifying addon ingress=true in "ingress-addon-legacy-671566"
	I0229 01:29:01.848946  326328 out.go:177] * Verifying ingress addon...
	I0229 01:29:01.851420  326328 out.go:177] 
	W0229 01:29:01.852764  326328 out.go:239] X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 get kube-client to validate ingress addon: client config: context "ingress-addon-legacy-671566" does not exist: client config: context "ingress-addon-legacy-671566" does not exist]
	W0229 01:29:01.852784  326328 out.go:239] * 
	W0229 01:29:01.855451  326328 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ecab7b1157b569c129811d3c2b680fbca2a6f3d2_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0229 01:29:01.860638  326328 out.go:177] 

                                                
                                                
** /stderr **
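
The retry.go lines above amount to "re-apply until the apiserver answers", with growing intervals; a simplified shell equivalent using a fixed 2-second interval rather than minikube's increasing backoff might be:

	until sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	    /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml; do
	    sleep 2
	done
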
ingress_addon_legacy_test.go:71: failed to enable ingress addon: exit status 10
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ingress-addon-legacy-671566 -n ingress-addon-legacy-671566
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ingress-addon-legacy-671566 -n ingress-addon-legacy-671566: exit status 6 (242.119437ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0229 01:29:02.093246  326639 status.go:415] kubeconfig endpoint: extract IP: "ingress-addon-legacy-671566" does not appear in /home/jenkins/minikube-integration/18063-309085/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ingress-addon-legacy-671566" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
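
The stale-context fix that the status output above suggests, applied to this profile; the kubectl line is only a hypothetical sanity check, not part of the suggested command:

	out/minikube-linux-amd64 -p ingress-addon-legacy-671566 update-context
	kubectl config current-context
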
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (119.23s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (94.04s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-671566 addons enable ingress-dns --alsologtostderr -v=5
E0229 01:29:28.597358  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/addons-026134/client.crt: no such file or directory
ingress_addon_legacy_test.go:79: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ingress-addon-legacy-671566 addons enable ingress-dns --alsologtostderr -v=5: exit status 10 (1m33.802249892s)

                                                
                                                
-- stdout --
	* ingress-dns is an addon maintained by minikube. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image cryptexlabs/minikube-ingress-dns:0.3.0
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0229 01:29:02.162771  326669 out.go:291] Setting OutFile to fd 1 ...
	I0229 01:29:02.163020  326669 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 01:29:02.163028  326669 out.go:304] Setting ErrFile to fd 2...
	I0229 01:29:02.163032  326669 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 01:29:02.163237  326669 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18063-309085/.minikube/bin
	I0229 01:29:02.163516  326669 mustload.go:65] Loading cluster: ingress-addon-legacy-671566
	I0229 01:29:02.163868  326669 config.go:182] Loaded profile config "ingress-addon-legacy-671566": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.18.20
	I0229 01:29:02.163889  326669 addons.go:597] checking whether the cluster is paused
	I0229 01:29:02.163966  326669 config.go:182] Loaded profile config "ingress-addon-legacy-671566": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.18.20
	I0229 01:29:02.163978  326669 host.go:66] Checking if "ingress-addon-legacy-671566" exists ...
	I0229 01:29:02.164324  326669 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0229 01:29:02.164369  326669 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 01:29:02.178914  326669 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37581
	I0229 01:29:02.179364  326669 main.go:141] libmachine: () Calling .GetVersion
	I0229 01:29:02.179984  326669 main.go:141] libmachine: Using API Version  1
	I0229 01:29:02.180012  326669 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 01:29:02.180309  326669 main.go:141] libmachine: () Calling .GetMachineName
	I0229 01:29:02.180523  326669 main.go:141] libmachine: (ingress-addon-legacy-671566) Calling .GetState
	I0229 01:29:02.181875  326669 main.go:141] libmachine: (ingress-addon-legacy-671566) Calling .DriverName
	I0229 01:29:02.182110  326669 ssh_runner.go:195] Run: systemctl --version
	I0229 01:29:02.182133  326669 main.go:141] libmachine: (ingress-addon-legacy-671566) Calling .GetSSHHostname
	I0229 01:29:02.184017  326669 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | domain ingress-addon-legacy-671566 has defined MAC address 52:54:00:3b:c8:ec in network mk-ingress-addon-legacy-671566
	I0229 01:29:02.184327  326669 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:c8:ec", ip: ""} in network mk-ingress-addon-legacy-671566: {Iface:virbr1 ExpiryTime:2024-02-29 02:22:49 +0000 UTC Type:0 Mac:52:54:00:3b:c8:ec Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:ingress-addon-legacy-671566 Clientid:01:52:54:00:3b:c8:ec}
	I0229 01:29:02.184357  326669 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | domain ingress-addon-legacy-671566 has defined IP address 192.168.39.248 and MAC address 52:54:00:3b:c8:ec in network mk-ingress-addon-legacy-671566
	I0229 01:29:02.184440  326669 main.go:141] libmachine: (ingress-addon-legacy-671566) Calling .GetSSHPort
	I0229 01:29:02.184618  326669 main.go:141] libmachine: (ingress-addon-legacy-671566) Calling .GetSSHKeyPath
	I0229 01:29:02.184769  326669 main.go:141] libmachine: (ingress-addon-legacy-671566) Calling .GetSSHUsername
	I0229 01:29:02.184912  326669 sshutil.go:53] new ssh client: &{IP:192.168.39.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-309085/.minikube/machines/ingress-addon-legacy-671566/id_rsa Username:docker}
	I0229 01:29:02.265051  326669 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0229 01:29:02.265138  326669 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0229 01:29:02.313047  326669 cri.go:89] found id: ""
	I0229 01:29:02.313189  326669 main.go:141] libmachine: Making call to close driver server
	I0229 01:29:02.313228  326669 main.go:141] libmachine: (ingress-addon-legacy-671566) Calling .Close
	I0229 01:29:02.313588  326669 main.go:141] libmachine: Successfully made call to close driver server
	I0229 01:29:02.313615  326669 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 01:29:02.313621  326669 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | Closing plugin on server side
	I0229 01:29:02.315836  326669 out.go:177] * ingress-dns is an addon maintained by minikube. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	I0229 01:29:02.317347  326669 config.go:182] Loaded profile config "ingress-addon-legacy-671566": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.18.20
	I0229 01:29:02.317365  326669 addons.go:69] Setting ingress-dns=true in profile "ingress-addon-legacy-671566"
	I0229 01:29:02.317374  326669 addons.go:234] Setting addon ingress-dns=true in "ingress-addon-legacy-671566"
	I0229 01:29:02.317418  326669 host.go:66] Checking if "ingress-addon-legacy-671566" exists ...
	I0229 01:29:02.317684  326669 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0229 01:29:02.317727  326669 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 01:29:02.332732  326669 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41501
	I0229 01:29:02.333165  326669 main.go:141] libmachine: () Calling .GetVersion
	I0229 01:29:02.333667  326669 main.go:141] libmachine: Using API Version  1
	I0229 01:29:02.333688  326669 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 01:29:02.334020  326669 main.go:141] libmachine: () Calling .GetMachineName
	I0229 01:29:02.334481  326669 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0229 01:29:02.334522  326669 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 01:29:02.348712  326669 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45431
	I0229 01:29:02.349082  326669 main.go:141] libmachine: () Calling .GetVersion
	I0229 01:29:02.349582  326669 main.go:141] libmachine: Using API Version  1
	I0229 01:29:02.349607  326669 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 01:29:02.349900  326669 main.go:141] libmachine: () Calling .GetMachineName
	I0229 01:29:02.350099  326669 main.go:141] libmachine: (ingress-addon-legacy-671566) Calling .GetState
	I0229 01:29:02.351510  326669 main.go:141] libmachine: (ingress-addon-legacy-671566) Calling .DriverName
	I0229 01:29:02.353275  326669 out.go:177]   - Using image cryptexlabs/minikube-ingress-dns:0.3.0
	I0229 01:29:02.354604  326669 addons.go:426] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0229 01:29:02.354620  326669 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2434 bytes)
	I0229 01:29:02.354636  326669 main.go:141] libmachine: (ingress-addon-legacy-671566) Calling .GetSSHHostname
	I0229 01:29:02.357024  326669 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | domain ingress-addon-legacy-671566 has defined MAC address 52:54:00:3b:c8:ec in network mk-ingress-addon-legacy-671566
	I0229 01:29:02.357340  326669 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:c8:ec", ip: ""} in network mk-ingress-addon-legacy-671566: {Iface:virbr1 ExpiryTime:2024-02-29 02:22:49 +0000 UTC Type:0 Mac:52:54:00:3b:c8:ec Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:ingress-addon-legacy-671566 Clientid:01:52:54:00:3b:c8:ec}
	I0229 01:29:02.357357  326669 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | domain ingress-addon-legacy-671566 has defined IP address 192.168.39.248 and MAC address 52:54:00:3b:c8:ec in network mk-ingress-addon-legacy-671566
	I0229 01:29:02.357499  326669 main.go:141] libmachine: (ingress-addon-legacy-671566) Calling .GetSSHPort
	I0229 01:29:02.357704  326669 main.go:141] libmachine: (ingress-addon-legacy-671566) Calling .GetSSHKeyPath
	I0229 01:29:02.357867  326669 main.go:141] libmachine: (ingress-addon-legacy-671566) Calling .GetSSHUsername
	I0229 01:29:02.358025  326669 sshutil.go:53] new ssh client: &{IP:192.168.39.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-309085/.minikube/machines/ingress-addon-legacy-671566/id_rsa Username:docker}
	I0229 01:29:02.449777  326669 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0229 01:29:02.521136  326669 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:29:02.521184  326669 retry.go:31] will retry after 178.321117ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:29:02.700674  326669 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0229 01:29:02.772806  326669 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:29:02.772848  326669 retry.go:31] will retry after 188.44585ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:29:02.962421  326669 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0229 01:29:03.032325  326669 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:29:03.032369  326669 retry.go:31] will retry after 682.426485ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:29:03.715259  326669 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0229 01:29:03.782567  326669 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:29:03.782614  326669 retry.go:31] will retry after 850.161152ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:29:04.633696  326669 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0229 01:29:04.699600  326669 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:29:04.699656  326669 retry.go:31] will retry after 1.836168271s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:29:06.537701  326669 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0229 01:29:06.605919  326669 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:29:06.605955  326669 retry.go:31] will retry after 2.217996546s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:29:08.825577  326669 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0229 01:29:08.908066  326669 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:29:08.908109  326669 retry.go:31] will retry after 3.410725587s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:29:12.319652  326669 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0229 01:29:12.403346  326669 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:29:12.403383  326669 retry.go:31] will retry after 4.675780886s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:29:17.083308  326669 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0229 01:29:17.154919  326669 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:29:17.154964  326669 retry.go:31] will retry after 5.23227791s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:29:22.387560  326669 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0229 01:29:22.462125  326669 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:29:22.462172  326669 retry.go:31] will retry after 5.724728867s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:29:28.187138  326669 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0229 01:29:28.254606  326669 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:29:28.254641  326669 retry.go:31] will retry after 8.328520209s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:29:36.587448  326669 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0229 01:29:36.651514  326669 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:29:36.651552  326669 retry.go:31] will retry after 12.236859522s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:29:48.889514  326669 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0229 01:29:48.952524  326669 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:29:48.952558  326669 retry.go:31] will retry after 46.876023684s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:30:35.832184  326669 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0229 01:30:35.900707  326669 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:30:35.900801  326669 main.go:141] libmachine: Making call to close driver server
	I0229 01:30:35.900834  326669 main.go:141] libmachine: (ingress-addon-legacy-671566) Calling .Close
	I0229 01:30:35.901178  326669 main.go:141] libmachine: Successfully made call to close driver server
	I0229 01:30:35.901203  326669 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 01:30:35.901213  326669 main.go:141] libmachine: Making call to close driver server
	I0229 01:30:35.901217  326669 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | Closing plugin on server side
	I0229 01:30:35.901223  326669 main.go:141] libmachine: (ingress-addon-legacy-671566) Calling .Close
	I0229 01:30:35.901454  326669 main.go:141] libmachine: Successfully made call to close driver server
	I0229 01:30:35.901470  326669 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 01:30:35.901474  326669 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | Closing plugin on server side
	I0229 01:30:35.903880  326669 out.go:177] 
	W0229 01:30:35.905190  326669 out.go:239] X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	W0229 01:30:35.905205  326669 out.go:239] * 
	W0229 01:30:35.907914  326669 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_26091442b04c5e26589fdfa18b5031c2ff11dd6b_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0229 01:30:35.909207  326669 out.go:177] 

                                                
                                                
** /stderr **
ingress_addon_legacy_test.go:80: failed to enable ingress-dns addon: exit status 10
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ingress-addon-legacy-671566 -n ingress-addon-legacy-671566
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ingress-addon-legacy-671566 -n ingress-addon-legacy-671566: exit status 6 (235.675116ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0229 01:30:36.132891  326910 status.go:415] kubeconfig endpoint: extract IP: "ingress-addon-legacy-671566" does not appear in /home/jenkins/minikube-integration/18063-309085/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ingress-addon-legacy-671566" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (94.04s)
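
Note on the failure mode above: the retry.go:31 lines show minikube wrapping `kubectl apply` in a backoff loop, retrying after growing, jittered delays (178ms, 188ms, 682ms, ... 46.9s) until the budget is exhausted and the addon enable exits with status 10. A minimal, self-contained Go sketch of that pattern, with hypothetical names and values, not minikube's actual retry implementation:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff calls fn until it succeeds or maxElapsed has passed,
// sleeping a jittered, roughly doubling delay between attempts, matching
// the cadence of the "will retry after ..." lines in the log above.
func retryWithBackoff(fn func() error, initial, maxElapsed time.Duration) error {
	delay := initial
	deadline := time.Now().Add(maxElapsed)
	for {
		err := fn()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("giving up: %w", err)
		}
		// Jitter so concurrent callers do not retry in lockstep.
		sleep := delay + time.Duration(rand.Int63n(int64(delay/2)+1))
		fmt.Printf("will retry after %v: %v\n", sleep, err)
		time.Sleep(sleep)
		delay *= 2
	}
}

func main() {
	// Simulate an apiserver that only comes up on the fourth attempt.
	attempt := 0
	err := retryWithBackoff(func() error {
		attempt++
		if attempt < 4 {
			return errors.New("connection to the server localhost:8443 was refused")
		}
		return nil
	}, 200*time.Millisecond, 90*time.Second)
	fmt.Println("attempts:", attempt, "err:", err)
}

Here every attempt fails the same way because the apiserver never recovers, so the loop runs to its budget and the enable fails.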

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddons (0.23s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:201: failed to get Kubernetes client: <nil>
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ingress-addon-legacy-671566 -n ingress-addon-legacy-671566
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ingress-addon-legacy-671566 -n ingress-addon-legacy-671566: exit status 6 (232.320437ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0229 01:30:36.367287  326940 status.go:415] kubeconfig endpoint: extract IP: "ingress-addon-legacy-671566" does not appear in /home/jenkins/minikube-integration/18063-309085/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ingress-addon-legacy-671566" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (0.23s)
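
Both post-mortems above fail identically: status.go:415 cannot extract the apiserver IP because the profile's cluster entry is missing from the kubeconfig, so no Kubernetes client can be built (hence "failed to get Kubernetes client: <nil>"). A short sketch of that lookup using client-go's clientcmd, reusing the path and profile name from the log; this is an illustration, not minikube's exact status.go code:

package main

import (
	"fmt"
	"log"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Path and profile name as reported in the log above.
	const kubeconfig = "/home/jenkins/minikube-integration/18063-309085/kubeconfig"
	const profile = "ingress-addon-legacy-671566"

	cfg, err := clientcmd.LoadFromFile(kubeconfig)
	if err != nil {
		log.Fatal(err)
	}
	cluster, ok := cfg.Clusters[profile]
	if !ok {
		// The condition the log reports as: "<profile>" does not appear in <kubeconfig>.
		log.Fatalf("%q does not appear in %s", profile, kubeconfig)
	}
	fmt.Println("apiserver endpoint:", cluster.Server)
}

Running `minikube update-context`, as the WARNING suggests, rewrites the missing entry so this lookup succeeds again.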

                                                
                                    
TestKubernetesUpgrade (387.1s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-335938 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
E0229 02:01:14.620884  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/functional-601906/client.crt: no such file or directory
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-335938 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: exit status 109 (4m48.284698001s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-335938] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18063
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18063-309085/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18063-309085/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting control plane node kubernetes-upgrade-335938 in cluster kubernetes-upgrade-335938
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.16.0 on containerd 1.7.11 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0229 02:01:14.593346  340762 out.go:291] Setting OutFile to fd 1 ...
	I0229 02:01:14.593529  340762 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 02:01:14.593539  340762 out.go:304] Setting ErrFile to fd 2...
	I0229 02:01:14.593544  340762 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 02:01:14.593842  340762 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18063-309085/.minikube/bin
	I0229 02:01:14.595166  340762 out.go:298] Setting JSON to false
	I0229 02:01:14.597048  340762 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":6219,"bootTime":1709165856,"procs":215,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1052-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0229 02:01:14.597116  340762 start.go:139] virtualization: kvm guest
	I0229 02:01:14.599165  340762 out.go:177] * [kubernetes-upgrade-335938] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0229 02:01:14.600664  340762 out.go:177]   - MINIKUBE_LOCATION=18063
	I0229 02:01:14.600661  340762 notify.go:220] Checking for updates...
	I0229 02:01:14.601926  340762 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0229 02:01:14.603092  340762 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18063-309085/kubeconfig
	I0229 02:01:14.604326  340762 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18063-309085/.minikube
	I0229 02:01:14.605546  340762 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0229 02:01:14.606731  340762 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0229 02:01:14.608226  340762 config.go:182] Loaded profile config "NoKubernetes-493829": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v0.0.0
	I0229 02:01:14.608338  340762 config.go:182] Loaded profile config "cert-expiration-113971": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0229 02:01:14.608436  340762 config.go:182] Loaded profile config "cert-options-900483": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0229 02:01:14.608527  340762 driver.go:392] Setting default libvirt URI to qemu:///system
	I0229 02:01:14.649075  340762 out.go:177] * Using the kvm2 driver based on user configuration
	I0229 02:01:14.650432  340762 start.go:299] selected driver: kvm2
	I0229 02:01:14.650452  340762 start.go:903] validating driver "kvm2" against <nil>
	I0229 02:01:14.650473  340762 start.go:914] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0229 02:01:14.651580  340762 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 02:01:14.651705  340762 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18063-309085/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0229 02:01:14.667302  340762 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0229 02:01:14.667362  340762 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0229 02:01:14.667680  340762 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I0229 02:01:14.667767  340762 cni.go:84] Creating CNI manager for ""
	I0229 02:01:14.667796  340762 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0229 02:01:14.667810  340762 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0229 02:01:14.667823  340762 start_flags.go:323] config:
	{Name:kubernetes-upgrade-335938 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-335938 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 02:01:14.668023  340762 iso.go:125] acquiring lock: {Name:mk1a6013324bd96d611c2de882ca0af6f4df38f8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 02:01:14.669683  340762 out.go:177] * Starting control plane node kubernetes-upgrade-335938 in cluster kubernetes-upgrade-335938
	I0229 02:01:14.670862  340762 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I0229 02:01:14.670933  340762 preload.go:148] Found local preload: /home/jenkins/minikube-integration/18063-309085/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4
	I0229 02:01:14.670946  340762 cache.go:56] Caching tarball of preloaded images
	I0229 02:01:14.671048  340762 preload.go:174] Found /home/jenkins/minikube-integration/18063-309085/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0229 02:01:14.671063  340762 cache.go:59] Finished verifying existence of preloaded tar for  v1.16.0 on containerd
	I0229 02:01:14.671181  340762 profile.go:148] Saving config to /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/kubernetes-upgrade-335938/config.json ...
	I0229 02:01:14.671202  340762 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/kubernetes-upgrade-335938/config.json: {Name:mka448ff759519b0c8cf254b36c21a3a2cdb0c45 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:01:14.671349  340762 start.go:365] acquiring machines lock for kubernetes-upgrade-335938: {Name:mk8de78527e9cb979575b614e5d893b33768243a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0229 02:01:27.019491  340762 start.go:369] acquired machines lock for "kubernetes-upgrade-335938" in 12.348095514s
	I0229 02:01:27.019564  340762 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-335938 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-335938 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0229 02:01:27.019755  340762 start.go:125] createHost starting for "" (driver="kvm2")
	I0229 02:01:27.021673  340762 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0229 02:01:27.021916  340762 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0229 02:01:27.021977  340762 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:01:27.038967  340762 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43327
	I0229 02:01:27.039406  340762 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:01:27.040020  340762 main.go:141] libmachine: Using API Version  1
	I0229 02:01:27.040050  340762 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:01:27.040422  340762 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:01:27.040627  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) Calling .GetMachineName
	I0229 02:01:27.040796  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) Calling .DriverName
	I0229 02:01:27.040965  340762 start.go:159] libmachine.API.Create for "kubernetes-upgrade-335938" (driver="kvm2")
	I0229 02:01:27.041004  340762 client.go:168] LocalClient.Create starting
	I0229 02:01:27.041041  340762 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18063-309085/.minikube/certs/ca.pem
	I0229 02:01:27.041078  340762 main.go:141] libmachine: Decoding PEM data...
	I0229 02:01:27.041101  340762 main.go:141] libmachine: Parsing certificate...
	I0229 02:01:27.041186  340762 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18063-309085/.minikube/certs/cert.pem
	I0229 02:01:27.041215  340762 main.go:141] libmachine: Decoding PEM data...
	I0229 02:01:27.041235  340762 main.go:141] libmachine: Parsing certificate...
	I0229 02:01:27.041263  340762 main.go:141] libmachine: Running pre-create checks...
	I0229 02:01:27.041279  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) Calling .PreCreateCheck
	I0229 02:01:27.041724  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) Calling .GetConfigRaw
	I0229 02:01:27.042215  340762 main.go:141] libmachine: Creating machine...
	I0229 02:01:27.042235  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) Calling .Create
	I0229 02:01:27.042411  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) Creating KVM machine...
	I0229 02:01:27.043694  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) DBG | found existing default KVM network
	I0229 02:01:27.044939  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) DBG | I0229 02:01:27.044749  341025 network.go:212] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr4 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:ec:a0:de} reservation:<nil>}
	I0229 02:01:27.046062  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) DBG | I0229 02:01:27.045973  341025 network.go:207] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00026c5c0}
	I0229 02:01:27.051184  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) DBG | trying to create private KVM network mk-kubernetes-upgrade-335938 192.168.50.0/24...
	I0229 02:01:27.120766  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) DBG | private KVM network mk-kubernetes-upgrade-335938 192.168.50.0/24 created
	I0229 02:01:27.120799  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) Setting up store path in /home/jenkins/minikube-integration/18063-309085/.minikube/machines/kubernetes-upgrade-335938 ...
	I0229 02:01:27.120815  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) DBG | I0229 02:01:27.120743  341025 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18063-309085/.minikube
	I0229 02:01:27.120853  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) Building disk image from file:///home/jenkins/minikube-integration/18063-309085/.minikube/cache/iso/amd64/minikube-v1.32.1-1708638130-18020-amd64.iso
	I0229 02:01:27.121027  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) Downloading /home/jenkins/minikube-integration/18063-309085/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18063-309085/.minikube/cache/iso/amd64/minikube-v1.32.1-1708638130-18020-amd64.iso...
	I0229 02:01:27.375952  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) DBG | I0229 02:01:27.375816  341025 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18063-309085/.minikube/machines/kubernetes-upgrade-335938/id_rsa...
	I0229 02:01:27.501459  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) DBG | I0229 02:01:27.501308  341025 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18063-309085/.minikube/machines/kubernetes-upgrade-335938/kubernetes-upgrade-335938.rawdisk...
	I0229 02:01:27.501502  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) DBG | Writing magic tar header
	I0229 02:01:27.501570  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) DBG | Writing SSH key tar header
	I0229 02:01:27.501601  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) DBG | I0229 02:01:27.501472  341025 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18063-309085/.minikube/machines/kubernetes-upgrade-335938 ...
	I0229 02:01:27.501614  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18063-309085/.minikube/machines/kubernetes-upgrade-335938
	I0229 02:01:27.501668  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) Setting executable bit set on /home/jenkins/minikube-integration/18063-309085/.minikube/machines/kubernetes-upgrade-335938 (perms=drwx------)
	I0229 02:01:27.501692  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18063-309085/.minikube/machines
	I0229 02:01:27.501709  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) Setting executable bit set on /home/jenkins/minikube-integration/18063-309085/.minikube/machines (perms=drwxr-xr-x)
	I0229 02:01:27.501727  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) Setting executable bit set on /home/jenkins/minikube-integration/18063-309085/.minikube (perms=drwxr-xr-x)
	I0229 02:01:27.501760  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) Setting executable bit set on /home/jenkins/minikube-integration/18063-309085 (perms=drwxrwxr-x)
	I0229 02:01:27.501774  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18063-309085/.minikube
	I0229 02:01:27.501792  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18063-309085
	I0229 02:01:27.501804  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0229 02:01:27.501816  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) DBG | Checking permissions on dir: /home/jenkins
	I0229 02:01:27.501823  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) DBG | Checking permissions on dir: /home
	I0229 02:01:27.501837  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) DBG | Skipping /home - not owner
	I0229 02:01:27.501854  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0229 02:01:27.501869  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0229 02:01:27.501878  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) Creating domain...
	I0229 02:01:27.502935  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) define libvirt domain using xml: 
	I0229 02:01:27.502961  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) <domain type='kvm'>
	I0229 02:01:27.502969  340762 main.go:141] libmachine: (kubernetes-upgrade-335938)   <name>kubernetes-upgrade-335938</name>
	I0229 02:01:27.502974  340762 main.go:141] libmachine: (kubernetes-upgrade-335938)   <memory unit='MiB'>2200</memory>
	I0229 02:01:27.502983  340762 main.go:141] libmachine: (kubernetes-upgrade-335938)   <vcpu>2</vcpu>
	I0229 02:01:27.502992  340762 main.go:141] libmachine: (kubernetes-upgrade-335938)   <features>
	I0229 02:01:27.503021  340762 main.go:141] libmachine: (kubernetes-upgrade-335938)     <acpi/>
	I0229 02:01:27.503042  340762 main.go:141] libmachine: (kubernetes-upgrade-335938)     <apic/>
	I0229 02:01:27.503055  340762 main.go:141] libmachine: (kubernetes-upgrade-335938)     <pae/>
	I0229 02:01:27.503065  340762 main.go:141] libmachine: (kubernetes-upgrade-335938)     
	I0229 02:01:27.503073  340762 main.go:141] libmachine: (kubernetes-upgrade-335938)   </features>
	I0229 02:01:27.503087  340762 main.go:141] libmachine: (kubernetes-upgrade-335938)   <cpu mode='host-passthrough'>
	I0229 02:01:27.503094  340762 main.go:141] libmachine: (kubernetes-upgrade-335938)   
	I0229 02:01:27.503103  340762 main.go:141] libmachine: (kubernetes-upgrade-335938)   </cpu>
	I0229 02:01:27.503117  340762 main.go:141] libmachine: (kubernetes-upgrade-335938)   <os>
	I0229 02:01:27.503132  340762 main.go:141] libmachine: (kubernetes-upgrade-335938)     <type>hvm</type>
	I0229 02:01:27.503143  340762 main.go:141] libmachine: (kubernetes-upgrade-335938)     <boot dev='cdrom'/>
	I0229 02:01:27.503154  340762 main.go:141] libmachine: (kubernetes-upgrade-335938)     <boot dev='hd'/>
	I0229 02:01:27.503166  340762 main.go:141] libmachine: (kubernetes-upgrade-335938)     <bootmenu enable='no'/>
	I0229 02:01:27.503176  340762 main.go:141] libmachine: (kubernetes-upgrade-335938)   </os>
	I0229 02:01:27.503203  340762 main.go:141] libmachine: (kubernetes-upgrade-335938)   <devices>
	I0229 02:01:27.503222  340762 main.go:141] libmachine: (kubernetes-upgrade-335938)     <disk type='file' device='cdrom'>
	I0229 02:01:27.503240  340762 main.go:141] libmachine: (kubernetes-upgrade-335938)       <source file='/home/jenkins/minikube-integration/18063-309085/.minikube/machines/kubernetes-upgrade-335938/boot2docker.iso'/>
	I0229 02:01:27.503251  340762 main.go:141] libmachine: (kubernetes-upgrade-335938)       <target dev='hdc' bus='scsi'/>
	I0229 02:01:27.503261  340762 main.go:141] libmachine: (kubernetes-upgrade-335938)       <readonly/>
	I0229 02:01:27.503271  340762 main.go:141] libmachine: (kubernetes-upgrade-335938)     </disk>
	I0229 02:01:27.503283  340762 main.go:141] libmachine: (kubernetes-upgrade-335938)     <disk type='file' device='disk'>
	I0229 02:01:27.503301  340762 main.go:141] libmachine: (kubernetes-upgrade-335938)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0229 02:01:27.503344  340762 main.go:141] libmachine: (kubernetes-upgrade-335938)       <source file='/home/jenkins/minikube-integration/18063-309085/.minikube/machines/kubernetes-upgrade-335938/kubernetes-upgrade-335938.rawdisk'/>
	I0229 02:01:27.503364  340762 main.go:141] libmachine: (kubernetes-upgrade-335938)       <target dev='hda' bus='virtio'/>
	I0229 02:01:27.503373  340762 main.go:141] libmachine: (kubernetes-upgrade-335938)     </disk>
	I0229 02:01:27.503378  340762 main.go:141] libmachine: (kubernetes-upgrade-335938)     <interface type='network'>
	I0229 02:01:27.503388  340762 main.go:141] libmachine: (kubernetes-upgrade-335938)       <source network='mk-kubernetes-upgrade-335938'/>
	I0229 02:01:27.503395  340762 main.go:141] libmachine: (kubernetes-upgrade-335938)       <model type='virtio'/>
	I0229 02:01:27.503400  340762 main.go:141] libmachine: (kubernetes-upgrade-335938)     </interface>
	I0229 02:01:27.503407  340762 main.go:141] libmachine: (kubernetes-upgrade-335938)     <interface type='network'>
	I0229 02:01:27.503415  340762 main.go:141] libmachine: (kubernetes-upgrade-335938)       <source network='default'/>
	I0229 02:01:27.503422  340762 main.go:141] libmachine: (kubernetes-upgrade-335938)       <model type='virtio'/>
	I0229 02:01:27.503431  340762 main.go:141] libmachine: (kubernetes-upgrade-335938)     </interface>
	I0229 02:01:27.503445  340762 main.go:141] libmachine: (kubernetes-upgrade-335938)     <serial type='pty'>
	I0229 02:01:27.503465  340762 main.go:141] libmachine: (kubernetes-upgrade-335938)       <target port='0'/>
	I0229 02:01:27.503475  340762 main.go:141] libmachine: (kubernetes-upgrade-335938)     </serial>
	I0229 02:01:27.503485  340762 main.go:141] libmachine: (kubernetes-upgrade-335938)     <console type='pty'>
	I0229 02:01:27.503496  340762 main.go:141] libmachine: (kubernetes-upgrade-335938)       <target type='serial' port='0'/>
	I0229 02:01:27.503505  340762 main.go:141] libmachine: (kubernetes-upgrade-335938)     </console>
	I0229 02:01:27.503512  340762 main.go:141] libmachine: (kubernetes-upgrade-335938)     <rng model='virtio'>
	I0229 02:01:27.503529  340762 main.go:141] libmachine: (kubernetes-upgrade-335938)       <backend model='random'>/dev/random</backend>
	I0229 02:01:27.503540  340762 main.go:141] libmachine: (kubernetes-upgrade-335938)     </rng>
	I0229 02:01:27.503549  340762 main.go:141] libmachine: (kubernetes-upgrade-335938)     
	I0229 02:01:27.503560  340762 main.go:141] libmachine: (kubernetes-upgrade-335938)     
	I0229 02:01:27.503570  340762 main.go:141] libmachine: (kubernetes-upgrade-335938)   </devices>
	I0229 02:01:27.503584  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) </domain>
	I0229 02:01:27.503598  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) 
	I0229 02:01:27.507488  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) DBG | domain kubernetes-upgrade-335938 has defined MAC address 52:54:00:05:d1:4d in network default
	I0229 02:01:27.508003  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) Ensuring networks are active...
	I0229 02:01:27.508029  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) DBG | domain kubernetes-upgrade-335938 has defined MAC address 52:54:00:e1:ea:4e in network mk-kubernetes-upgrade-335938
	I0229 02:01:27.508695  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) Ensuring network default is active
	I0229 02:01:27.509028  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) Ensuring network mk-kubernetes-upgrade-335938 is active
	I0229 02:01:27.509514  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) Getting domain xml...
	I0229 02:01:27.510239  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) Creating domain...
	I0229 02:01:28.770978  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) Waiting to get IP...
	I0229 02:01:28.771707  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) DBG | domain kubernetes-upgrade-335938 has defined MAC address 52:54:00:e1:ea:4e in network mk-kubernetes-upgrade-335938
	I0229 02:01:28.772191  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) DBG | unable to find current IP address of domain kubernetes-upgrade-335938 in network mk-kubernetes-upgrade-335938
	I0229 02:01:28.772219  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) DBG | I0229 02:01:28.772142  341025 retry.go:31] will retry after 192.037632ms: waiting for machine to come up
	I0229 02:01:28.966957  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) DBG | domain kubernetes-upgrade-335938 has defined MAC address 52:54:00:e1:ea:4e in network mk-kubernetes-upgrade-335938
	I0229 02:01:28.967510  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) DBG | unable to find current IP address of domain kubernetes-upgrade-335938 in network mk-kubernetes-upgrade-335938
	I0229 02:01:28.967541  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) DBG | I0229 02:01:28.967453  341025 retry.go:31] will retry after 357.836677ms: waiting for machine to come up
	I0229 02:01:29.327078  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) DBG | domain kubernetes-upgrade-335938 has defined MAC address 52:54:00:e1:ea:4e in network mk-kubernetes-upgrade-335938
	I0229 02:01:29.327624  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) DBG | unable to find current IP address of domain kubernetes-upgrade-335938 in network mk-kubernetes-upgrade-335938
	I0229 02:01:29.327649  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) DBG | I0229 02:01:29.327561  341025 retry.go:31] will retry after 486.3763ms: waiting for machine to come up
	I0229 02:01:29.815165  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) DBG | domain kubernetes-upgrade-335938 has defined MAC address 52:54:00:e1:ea:4e in network mk-kubernetes-upgrade-335938
	I0229 02:01:29.815585  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) DBG | unable to find current IP address of domain kubernetes-upgrade-335938 in network mk-kubernetes-upgrade-335938
	I0229 02:01:29.815614  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) DBG | I0229 02:01:29.815535  341025 retry.go:31] will retry after 586.752332ms: waiting for machine to come up
	I0229 02:01:30.404357  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) DBG | domain kubernetes-upgrade-335938 has defined MAC address 52:54:00:e1:ea:4e in network mk-kubernetes-upgrade-335938
	I0229 02:01:30.404865  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) DBG | unable to find current IP address of domain kubernetes-upgrade-335938 in network mk-kubernetes-upgrade-335938
	I0229 02:01:30.404893  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) DBG | I0229 02:01:30.404824  341025 retry.go:31] will retry after 756.856631ms: waiting for machine to come up
	I0229 02:01:31.164104  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) DBG | domain kubernetes-upgrade-335938 has defined MAC address 52:54:00:e1:ea:4e in network mk-kubernetes-upgrade-335938
	I0229 02:01:31.164737  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) DBG | unable to find current IP address of domain kubernetes-upgrade-335938 in network mk-kubernetes-upgrade-335938
	I0229 02:01:31.164766  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) DBG | I0229 02:01:31.164687  341025 retry.go:31] will retry after 639.446267ms: waiting for machine to come up
	I0229 02:01:31.805451  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) DBG | domain kubernetes-upgrade-335938 has defined MAC address 52:54:00:e1:ea:4e in network mk-kubernetes-upgrade-335938
	I0229 02:01:31.806010  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) DBG | unable to find current IP address of domain kubernetes-upgrade-335938 in network mk-kubernetes-upgrade-335938
	I0229 02:01:31.806033  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) DBG | I0229 02:01:31.805972  341025 retry.go:31] will retry after 835.187452ms: waiting for machine to come up
	I0229 02:01:32.642303  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) DBG | domain kubernetes-upgrade-335938 has defined MAC address 52:54:00:e1:ea:4e in network mk-kubernetes-upgrade-335938
	I0229 02:01:32.642756  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) DBG | unable to find current IP address of domain kubernetes-upgrade-335938 in network mk-kubernetes-upgrade-335938
	I0229 02:01:32.642825  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) DBG | I0229 02:01:32.642730  341025 retry.go:31] will retry after 927.850903ms: waiting for machine to come up
	I0229 02:01:33.571743  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) DBG | domain kubernetes-upgrade-335938 has defined MAC address 52:54:00:e1:ea:4e in network mk-kubernetes-upgrade-335938
	I0229 02:01:33.572149  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) DBG | unable to find current IP address of domain kubernetes-upgrade-335938 in network mk-kubernetes-upgrade-335938
	I0229 02:01:33.572181  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) DBG | I0229 02:01:33.572099  341025 retry.go:31] will retry after 1.397905477s: waiting for machine to come up
	I0229 02:01:34.971434  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) DBG | domain kubernetes-upgrade-335938 has defined MAC address 52:54:00:e1:ea:4e in network mk-kubernetes-upgrade-335938
	I0229 02:01:34.971977  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) DBG | unable to find current IP address of domain kubernetes-upgrade-335938 in network mk-kubernetes-upgrade-335938
	I0229 02:01:34.972025  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) DBG | I0229 02:01:34.971923  341025 retry.go:31] will retry after 2.257161629s: waiting for machine to come up
	I0229 02:01:37.231100  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) DBG | domain kubernetes-upgrade-335938 has defined MAC address 52:54:00:e1:ea:4e in network mk-kubernetes-upgrade-335938
	I0229 02:01:37.231594  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) DBG | unable to find current IP address of domain kubernetes-upgrade-335938 in network mk-kubernetes-upgrade-335938
	I0229 02:01:37.231632  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) DBG | I0229 02:01:37.231535  341025 retry.go:31] will retry after 2.229650175s: waiting for machine to come up
	I0229 02:01:39.463214  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) DBG | domain kubernetes-upgrade-335938 has defined MAC address 52:54:00:e1:ea:4e in network mk-kubernetes-upgrade-335938
	I0229 02:01:39.463717  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) DBG | unable to find current IP address of domain kubernetes-upgrade-335938 in network mk-kubernetes-upgrade-335938
	I0229 02:01:39.463746  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) DBG | I0229 02:01:39.463678  341025 retry.go:31] will retry after 3.53232894s: waiting for machine to come up
	I0229 02:01:42.997734  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) DBG | domain kubernetes-upgrade-335938 has defined MAC address 52:54:00:e1:ea:4e in network mk-kubernetes-upgrade-335938
	I0229 02:01:42.998230  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) DBG | unable to find current IP address of domain kubernetes-upgrade-335938 in network mk-kubernetes-upgrade-335938
	I0229 02:01:42.998263  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) DBG | I0229 02:01:42.998171  341025 retry.go:31] will retry after 3.716905918s: waiting for machine to come up
	I0229 02:01:46.716428  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) DBG | domain kubernetes-upgrade-335938 has defined MAC address 52:54:00:e1:ea:4e in network mk-kubernetes-upgrade-335938
	I0229 02:01:46.716798  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) DBG | unable to find current IP address of domain kubernetes-upgrade-335938 in network mk-kubernetes-upgrade-335938
	I0229 02:01:46.716826  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) DBG | I0229 02:01:46.716740  341025 retry.go:31] will retry after 4.199050658s: waiting for machine to come up
	I0229 02:01:50.919173  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) DBG | domain kubernetes-upgrade-335938 has defined MAC address 52:54:00:e1:ea:4e in network mk-kubernetes-upgrade-335938
	I0229 02:01:50.919688  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) Found IP for machine: 192.168.50.62
	I0229 02:01:50.919716  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) Reserving static IP address...
	I0229 02:01:50.919729  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) DBG | domain kubernetes-upgrade-335938 has current primary IP address 192.168.50.62 and MAC address 52:54:00:e1:ea:4e in network mk-kubernetes-upgrade-335938
	I0229 02:01:50.920073  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) DBG | unable to find host DHCP lease matching {name: "kubernetes-upgrade-335938", mac: "52:54:00:e1:ea:4e", ip: "192.168.50.62"} in network mk-kubernetes-upgrade-335938
	I0229 02:01:50.992654  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) DBG | Getting to WaitForSSH function...
	I0229 02:01:50.992690  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) Reserved static IP address: 192.168.50.62
	I0229 02:01:50.992704  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) Waiting for SSH to be available...
	I0229 02:01:50.995685  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) DBG | domain kubernetes-upgrade-335938 has defined MAC address 52:54:00:e1:ea:4e in network mk-kubernetes-upgrade-335938
	I0229 02:01:50.996093  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:e1:ea:4e", ip: ""} in network mk-kubernetes-upgrade-335938
	I0229 02:01:50.996133  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) DBG | unable to find defined IP address of network mk-kubernetes-upgrade-335938 interface with MAC address 52:54:00:e1:ea:4e
	I0229 02:01:50.996257  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) DBG | Using SSH client type: external
	I0229 02:01:50.996291  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) DBG | Using SSH private key: /home/jenkins/minikube-integration/18063-309085/.minikube/machines/kubernetes-upgrade-335938/id_rsa (-rw-------)
	I0229 02:01:50.996348  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18063-309085/.minikube/machines/kubernetes-upgrade-335938/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0229 02:01:50.996372  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) DBG | About to run SSH command:
	I0229 02:01:50.996392  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) DBG | exit 0
	I0229 02:01:50.999883  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) DBG | SSH cmd err, output: exit status 255: 
	I0229 02:01:50.999907  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0229 02:01:50.999918  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) DBG | command : exit 0
	I0229 02:01:50.999929  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) DBG | err     : exit status 255
	I0229 02:01:50.999942  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) DBG | output  : 
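
Note: the `exit 0` probe above fails with status 255 because sshd inside the freshly booted guest is not accepting connections yet; libmachine simply repeats the probe a few seconds later (and succeeds below). A minimal sketch of that probe-and-retry pattern, shelling out to the same external ssh binary the log shows — `waitForSSH` and its parameters are illustrative, not minikube's actual API:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForSSH is a hypothetical helper: run `ssh ... exit 0` against the
// guest until the no-op command exits 0 or the deadline passes.
func waitForSSH(user, host, keyPath string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("ssh",
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-o", "ConnectTimeout=10",
			"-i", keyPath,
			fmt.Sprintf("%s@%s", user, host),
			"exit 0")
		if err := cmd.Run(); err == nil {
			return nil // sshd answered and ran the no-op
		}
		time.Sleep(3 * time.Second) // roughly the gap between the attempts above
	}
	return fmt.Errorf("ssh to %s not available after %s", host, timeout)
}

func main() {
	if err := waitForSSH("docker", "192.168.50.62",
		"/home/jenkins/minikube-integration/18063-309085/.minikube/machines/kubernetes-upgrade-335938/id_rsa",
		2*time.Minute); err != nil {
		fmt.Println(err)
	}
}
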
	I0229 02:01:54.001400  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) DBG | Getting to WaitForSSH function...
	I0229 02:01:54.004026  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) DBG | domain kubernetes-upgrade-335938 has defined MAC address 52:54:00:e1:ea:4e in network mk-kubernetes-upgrade-335938
	I0229 02:01:54.004439  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:ea:4e", ip: ""} in network mk-kubernetes-upgrade-335938: {Iface:virbr2 ExpiryTime:2024-02-29 03:01:43 +0000 UTC Type:0 Mac:52:54:00:e1:ea:4e Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:kubernetes-upgrade-335938 Clientid:01:52:54:00:e1:ea:4e}
	I0229 02:01:54.004482  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) DBG | domain kubernetes-upgrade-335938 has defined IP address 192.168.50.62 and MAC address 52:54:00:e1:ea:4e in network mk-kubernetes-upgrade-335938
	I0229 02:01:54.004563  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) DBG | Using SSH client type: external
	I0229 02:01:54.004592  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) DBG | Using SSH private key: /home/jenkins/minikube-integration/18063-309085/.minikube/machines/kubernetes-upgrade-335938/id_rsa (-rw-------)
	I0229 02:01:54.004637  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.62 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18063-309085/.minikube/machines/kubernetes-upgrade-335938/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0229 02:01:54.004656  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) DBG | About to run SSH command:
	I0229 02:01:54.004671  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) DBG | exit 0
	I0229 02:01:54.126573  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) DBG | SSH cmd err, output: <nil>: 
	I0229 02:01:54.126871  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) KVM machine creation complete!
	I0229 02:01:54.127153  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) Calling .GetConfigRaw
	I0229 02:01:54.127737  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) Calling .DriverName
	I0229 02:01:54.127974  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) Calling .DriverName
	I0229 02:01:54.128160  340762 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0229 02:01:54.128179  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) Calling .GetState
	I0229 02:01:54.129550  340762 main.go:141] libmachine: Detecting operating system of created instance...
	I0229 02:01:54.129564  340762 main.go:141] libmachine: Waiting for SSH to be available...
	I0229 02:01:54.129569  340762 main.go:141] libmachine: Getting to WaitForSSH function...
	I0229 02:01:54.129592  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) Calling .GetSSHHostname
	I0229 02:01:54.131678  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) DBG | domain kubernetes-upgrade-335938 has defined MAC address 52:54:00:e1:ea:4e in network mk-kubernetes-upgrade-335938
	I0229 02:01:54.132087  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:ea:4e", ip: ""} in network mk-kubernetes-upgrade-335938: {Iface:virbr2 ExpiryTime:2024-02-29 03:01:43 +0000 UTC Type:0 Mac:52:54:00:e1:ea:4e Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:kubernetes-upgrade-335938 Clientid:01:52:54:00:e1:ea:4e}
	I0229 02:01:54.132114  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) DBG | domain kubernetes-upgrade-335938 has defined IP address 192.168.50.62 and MAC address 52:54:00:e1:ea:4e in network mk-kubernetes-upgrade-335938
	I0229 02:01:54.132264  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) Calling .GetSSHPort
	I0229 02:01:54.132492  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) Calling .GetSSHKeyPath
	I0229 02:01:54.132671  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) Calling .GetSSHKeyPath
	I0229 02:01:54.132826  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) Calling .GetSSHUsername
	I0229 02:01:54.133027  340762 main.go:141] libmachine: Using SSH client type: native
	I0229 02:01:54.133274  340762 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.50.62 22 <nil> <nil>}
	I0229 02:01:54.133290  340762 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0229 02:01:54.233503  340762 main.go:141] libmachine: SSH cmd err, output: <nil>: 
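
Note: once the VM answers, libmachine switches from the external ssh binary to a Go-native SSH client (the `&{{{<nil> ...}}}` dump above is that client's config struct). A hedged sketch of the same `exit 0` round-trip with golang.org/x/crypto/ssh; the address and key path come from the log, the rest is illustrative:

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Per-machine private key created by minikube (path from the log).
	key, err := os.ReadFile("/home/jenkins/minikube-integration/18063-309085/.minikube/machines/kubernetes-upgrade-335938/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test-VM host keys
	}
	client, err := ssh.Dial("tcp", "192.168.50.62:22", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer session.Close()
	// Same liveness check the log shows: a no-op whose exit status we inspect.
	if err := session.Run("exit 0"); err != nil {
		panic(err)
	}
	fmt.Println("ssh ok")
}
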
	I0229 02:01:54.233525  340762 main.go:141] libmachine: Detecting the provisioner...
	I0229 02:01:54.233533  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) Calling .GetSSHHostname
	I0229 02:01:54.236326  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) DBG | domain kubernetes-upgrade-335938 has defined MAC address 52:54:00:e1:ea:4e in network mk-kubernetes-upgrade-335938
	I0229 02:01:54.236730  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:ea:4e", ip: ""} in network mk-kubernetes-upgrade-335938: {Iface:virbr2 ExpiryTime:2024-02-29 03:01:43 +0000 UTC Type:0 Mac:52:54:00:e1:ea:4e Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:kubernetes-upgrade-335938 Clientid:01:52:54:00:e1:ea:4e}
	I0229 02:01:54.236769  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) DBG | domain kubernetes-upgrade-335938 has defined IP address 192.168.50.62 and MAC address 52:54:00:e1:ea:4e in network mk-kubernetes-upgrade-335938
	I0229 02:01:54.236982  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) Calling .GetSSHPort
	I0229 02:01:54.237237  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) Calling .GetSSHKeyPath
	I0229 02:01:54.237437  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) Calling .GetSSHKeyPath
	I0229 02:01:54.237634  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) Calling .GetSSHUsername
	I0229 02:01:54.237820  340762 main.go:141] libmachine: Using SSH client type: native
	I0229 02:01:54.238020  340762 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.50.62 22 <nil> <nil>}
	I0229 02:01:54.238035  340762 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0229 02:01:54.335409  340762 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0229 02:01:54.335483  340762 main.go:141] libmachine: found compatible host: buildroot
	I0229 02:01:54.335490  340762 main.go:141] libmachine: Provisioning with buildroot...
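
Note: the provisioner is picked by matching the ID field of the `cat /etc/os-release` output fetched above (here `buildroot`). A small sketch of that detection; `detectProvisioner` is an illustrative name, not minikube's:

package main

import (
	"bufio"
	"fmt"
	"strings"
)

// detectProvisioner parses os-release text: key=value lines, values
// optionally double-quoted, and returns the ID field.
func detectProvisioner(osRelease string) (string, error) {
	sc := bufio.NewScanner(strings.NewReader(osRelease))
	for sc.Scan() {
		k, v, ok := strings.Cut(sc.Text(), "=")
		if ok && k == "ID" {
			return strings.Trim(v, `"`), nil
		}
	}
	return "", fmt.Errorf("no ID field in os-release")
}

func main() {
	id, _ := detectProvisioner("NAME=Buildroot\nID=buildroot\nVERSION_ID=2023.02.9\n")
	fmt.Println(id) // "buildroot" -> found compatible host: buildroot
}
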
	I0229 02:01:54.335497  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) Calling .GetMachineName
	I0229 02:01:54.335802  340762 buildroot.go:166] provisioning hostname "kubernetes-upgrade-335938"
	I0229 02:01:54.335839  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) Calling .GetMachineName
	I0229 02:01:54.336068  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) Calling .GetSSHHostname
	I0229 02:01:54.338680  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) DBG | domain kubernetes-upgrade-335938 has defined MAC address 52:54:00:e1:ea:4e in network mk-kubernetes-upgrade-335938
	I0229 02:01:54.339041  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:ea:4e", ip: ""} in network mk-kubernetes-upgrade-335938: {Iface:virbr2 ExpiryTime:2024-02-29 03:01:43 +0000 UTC Type:0 Mac:52:54:00:e1:ea:4e Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:kubernetes-upgrade-335938 Clientid:01:52:54:00:e1:ea:4e}
	I0229 02:01:54.339067  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) DBG | domain kubernetes-upgrade-335938 has defined IP address 192.168.50.62 and MAC address 52:54:00:e1:ea:4e in network mk-kubernetes-upgrade-335938
	I0229 02:01:54.339229  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) Calling .GetSSHPort
	I0229 02:01:54.339426  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) Calling .GetSSHKeyPath
	I0229 02:01:54.339560  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) Calling .GetSSHKeyPath
	I0229 02:01:54.339733  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) Calling .GetSSHUsername
	I0229 02:01:54.339869  340762 main.go:141] libmachine: Using SSH client type: native
	I0229 02:01:54.340043  340762 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.50.62 22 <nil> <nil>}
	I0229 02:01:54.340054  340762 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-335938 && echo "kubernetes-upgrade-335938" | sudo tee /etc/hostname
	I0229 02:01:54.454385  340762 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-335938
	
	I0229 02:01:54.454426  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) Calling .GetSSHHostname
	I0229 02:01:54.457436  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) DBG | domain kubernetes-upgrade-335938 has defined MAC address 52:54:00:e1:ea:4e in network mk-kubernetes-upgrade-335938
	I0229 02:01:54.457818  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:ea:4e", ip: ""} in network mk-kubernetes-upgrade-335938: {Iface:virbr2 ExpiryTime:2024-02-29 03:01:43 +0000 UTC Type:0 Mac:52:54:00:e1:ea:4e Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:kubernetes-upgrade-335938 Clientid:01:52:54:00:e1:ea:4e}
	I0229 02:01:54.457851  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) DBG | domain kubernetes-upgrade-335938 has defined IP address 192.168.50.62 and MAC address 52:54:00:e1:ea:4e in network mk-kubernetes-upgrade-335938
	I0229 02:01:54.458006  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) Calling .GetSSHPort
	I0229 02:01:54.458223  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) Calling .GetSSHKeyPath
	I0229 02:01:54.458378  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) Calling .GetSSHKeyPath
	I0229 02:01:54.458487  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) Calling .GetSSHUsername
	I0229 02:01:54.458619  340762 main.go:141] libmachine: Using SSH client type: native
	I0229 02:01:54.458818  340762 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.50.62 22 <nil> <nil>}
	I0229 02:01:54.458846  340762 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-335938' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-335938/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-335938' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0229 02:01:54.568924  340762 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0229 02:01:54.568961  340762 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18063-309085/.minikube CaCertPath:/home/jenkins/minikube-integration/18063-309085/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18063-309085/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18063-309085/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18063-309085/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18063-309085/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18063-309085/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18063-309085/.minikube}
	I0229 02:01:54.568991  340762 buildroot.go:174] setting up certificates
	I0229 02:01:54.569003  340762 provision.go:83] configureAuth start
	I0229 02:01:54.569014  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) Calling .GetMachineName
	I0229 02:01:54.569341  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) Calling .GetIP
	I0229 02:01:54.571762  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) DBG | domain kubernetes-upgrade-335938 has defined MAC address 52:54:00:e1:ea:4e in network mk-kubernetes-upgrade-335938
	I0229 02:01:54.572125  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:ea:4e", ip: ""} in network mk-kubernetes-upgrade-335938: {Iface:virbr2 ExpiryTime:2024-02-29 03:01:43 +0000 UTC Type:0 Mac:52:54:00:e1:ea:4e Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:kubernetes-upgrade-335938 Clientid:01:52:54:00:e1:ea:4e}
	I0229 02:01:54.572152  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) DBG | domain kubernetes-upgrade-335938 has defined IP address 192.168.50.62 and MAC address 52:54:00:e1:ea:4e in network mk-kubernetes-upgrade-335938
	I0229 02:01:54.572266  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) Calling .GetSSHHostname
	I0229 02:01:54.574467  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) DBG | domain kubernetes-upgrade-335938 has defined MAC address 52:54:00:e1:ea:4e in network mk-kubernetes-upgrade-335938
	I0229 02:01:54.574831  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:ea:4e", ip: ""} in network mk-kubernetes-upgrade-335938: {Iface:virbr2 ExpiryTime:2024-02-29 03:01:43 +0000 UTC Type:0 Mac:52:54:00:e1:ea:4e Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:kubernetes-upgrade-335938 Clientid:01:52:54:00:e1:ea:4e}
	I0229 02:01:54.574879  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) DBG | domain kubernetes-upgrade-335938 has defined IP address 192.168.50.62 and MAC address 52:54:00:e1:ea:4e in network mk-kubernetes-upgrade-335938
	I0229 02:01:54.574966  340762 provision.go:138] copyHostCerts
	I0229 02:01:54.575053  340762 exec_runner.go:144] found /home/jenkins/minikube-integration/18063-309085/.minikube/ca.pem, removing ...
	I0229 02:01:54.575074  340762 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18063-309085/.minikube/ca.pem
	I0229 02:01:54.575128  340762 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18063-309085/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18063-309085/.minikube/ca.pem (1082 bytes)
	I0229 02:01:54.575230  340762 exec_runner.go:144] found /home/jenkins/minikube-integration/18063-309085/.minikube/cert.pem, removing ...
	I0229 02:01:54.575241  340762 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18063-309085/.minikube/cert.pem
	I0229 02:01:54.575260  340762 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18063-309085/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18063-309085/.minikube/cert.pem (1123 bytes)
	I0229 02:01:54.575311  340762 exec_runner.go:144] found /home/jenkins/minikube-integration/18063-309085/.minikube/key.pem, removing ...
	I0229 02:01:54.575318  340762 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18063-309085/.minikube/key.pem
	I0229 02:01:54.575333  340762 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18063-309085/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18063-309085/.minikube/key.pem (1675 bytes)
	I0229 02:01:54.575386  340762 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18063-309085/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18063-309085/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18063-309085/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-335938 san=[192.168.50.62 192.168.50.62 localhost 127.0.0.1 minikube kubernetes-upgrade-335938]
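
Note: configureAuth regenerates a server certificate whose subject alternative names cover exactly the san=[...] list logged above (machine IP, localhost, 127.0.0.1, minikube, the machine name). A self-contained sketch of issuing such a cert with crypto/x509 — self-signed here for brevity, whereas minikube signs with the ca.pem/ca-key.pem pair shown above:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.kubernetes-upgrade-335938"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs from the san=[...] list in the log line above.
		IPAddresses: []net.IP{net.ParseIP("192.168.50.62"), net.ParseIP("127.0.0.1")},
		DNSNames:    []string{"localhost", "minikube", "kubernetes-upgrade-335938"},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
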
	I0229 02:01:54.749210  340762 provision.go:172] copyRemoteCerts
	I0229 02:01:54.749270  340762 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0229 02:01:54.749304  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) Calling .GetSSHHostname
	I0229 02:01:54.751989  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) DBG | domain kubernetes-upgrade-335938 has defined MAC address 52:54:00:e1:ea:4e in network mk-kubernetes-upgrade-335938
	I0229 02:01:54.752345  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:ea:4e", ip: ""} in network mk-kubernetes-upgrade-335938: {Iface:virbr2 ExpiryTime:2024-02-29 03:01:43 +0000 UTC Type:0 Mac:52:54:00:e1:ea:4e Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:kubernetes-upgrade-335938 Clientid:01:52:54:00:e1:ea:4e}
	I0229 02:01:54.752375  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) DBG | domain kubernetes-upgrade-335938 has defined IP address 192.168.50.62 and MAC address 52:54:00:e1:ea:4e in network mk-kubernetes-upgrade-335938
	I0229 02:01:54.752616  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) Calling .GetSSHPort
	I0229 02:01:54.752825  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) Calling .GetSSHKeyPath
	I0229 02:01:54.752977  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) Calling .GetSSHUsername
	I0229 02:01:54.753094  340762 sshutil.go:53] new ssh client: &{IP:192.168.50.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-309085/.minikube/machines/kubernetes-upgrade-335938/id_rsa Username:docker}
	I0229 02:01:54.832825  340762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-309085/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0229 02:01:54.859460  340762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-309085/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0229 02:01:54.884911  340762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-309085/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0229 02:01:54.910187  340762 provision.go:86] duration metric: configureAuth took 341.172529ms
	I0229 02:01:54.910217  340762 buildroot.go:189] setting minikube options for container-runtime
	I0229 02:01:54.910410  340762 config.go:182] Loaded profile config "kubernetes-upgrade-335938": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.16.0
	I0229 02:01:54.910436  340762 main.go:141] libmachine: Checking connection to Docker...
	I0229 02:01:54.910450  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) Calling .GetURL
	I0229 02:01:54.911641  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) DBG | Using libvirt version 6000000
	I0229 02:01:54.913601  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) DBG | domain kubernetes-upgrade-335938 has defined MAC address 52:54:00:e1:ea:4e in network mk-kubernetes-upgrade-335938
	I0229 02:01:54.914005  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:ea:4e", ip: ""} in network mk-kubernetes-upgrade-335938: {Iface:virbr2 ExpiryTime:2024-02-29 03:01:43 +0000 UTC Type:0 Mac:52:54:00:e1:ea:4e Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:kubernetes-upgrade-335938 Clientid:01:52:54:00:e1:ea:4e}
	I0229 02:01:54.914032  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) DBG | domain kubernetes-upgrade-335938 has defined IP address 192.168.50.62 and MAC address 52:54:00:e1:ea:4e in network mk-kubernetes-upgrade-335938
	I0229 02:01:54.914266  340762 main.go:141] libmachine: Docker is up and running!
	I0229 02:01:54.914278  340762 main.go:141] libmachine: Reticulating splines...
	I0229 02:01:54.914286  340762 client.go:171] LocalClient.Create took 27.873268941s
	I0229 02:01:54.914310  340762 start.go:167] duration metric: libmachine.API.Create for "kubernetes-upgrade-335938" took 27.873346761s
	I0229 02:01:54.914336  340762 start.go:300] post-start starting for "kubernetes-upgrade-335938" (driver="kvm2")
	I0229 02:01:54.914352  340762 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0229 02:01:54.914374  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) Calling .DriverName
	I0229 02:01:54.914624  340762 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0229 02:01:54.914649  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) Calling .GetSSHHostname
	I0229 02:01:54.916695  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) DBG | domain kubernetes-upgrade-335938 has defined MAC address 52:54:00:e1:ea:4e in network mk-kubernetes-upgrade-335938
	I0229 02:01:54.916995  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:ea:4e", ip: ""} in network mk-kubernetes-upgrade-335938: {Iface:virbr2 ExpiryTime:2024-02-29 03:01:43 +0000 UTC Type:0 Mac:52:54:00:e1:ea:4e Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:kubernetes-upgrade-335938 Clientid:01:52:54:00:e1:ea:4e}
	I0229 02:01:54.917024  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) DBG | domain kubernetes-upgrade-335938 has defined IP address 192.168.50.62 and MAC address 52:54:00:e1:ea:4e in network mk-kubernetes-upgrade-335938
	I0229 02:01:54.917133  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) Calling .GetSSHPort
	I0229 02:01:54.917328  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) Calling .GetSSHKeyPath
	I0229 02:01:54.917469  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) Calling .GetSSHUsername
	I0229 02:01:54.917568  340762 sshutil.go:53] new ssh client: &{IP:192.168.50.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-309085/.minikube/machines/kubernetes-upgrade-335938/id_rsa Username:docker}
	I0229 02:01:54.997400  340762 ssh_runner.go:195] Run: cat /etc/os-release
	I0229 02:01:55.002469  340762 info.go:137] Remote host: Buildroot 2023.02.9
	I0229 02:01:55.002502  340762 filesync.go:126] Scanning /home/jenkins/minikube-integration/18063-309085/.minikube/addons for local assets ...
	I0229 02:01:55.002576  340762 filesync.go:126] Scanning /home/jenkins/minikube-integration/18063-309085/.minikube/files for local assets ...
	I0229 02:01:55.002686  340762 filesync.go:149] local asset: /home/jenkins/minikube-integration/18063-309085/.minikube/files/etc/ssl/certs/3163362.pem -> 3163362.pem in /etc/ssl/certs
	I0229 02:01:55.002810  340762 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0229 02:01:55.012961  340762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-309085/.minikube/files/etc/ssl/certs/3163362.pem --> /etc/ssl/certs/3163362.pem (1708 bytes)
	I0229 02:01:55.039306  340762 start.go:303] post-start completed in 124.953999ms
	I0229 02:01:55.039360  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) Calling .GetConfigRaw
	I0229 02:01:55.039960  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) Calling .GetIP
	I0229 02:01:55.042679  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) DBG | domain kubernetes-upgrade-335938 has defined MAC address 52:54:00:e1:ea:4e in network mk-kubernetes-upgrade-335938
	I0229 02:01:55.043115  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:ea:4e", ip: ""} in network mk-kubernetes-upgrade-335938: {Iface:virbr2 ExpiryTime:2024-02-29 03:01:43 +0000 UTC Type:0 Mac:52:54:00:e1:ea:4e Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:kubernetes-upgrade-335938 Clientid:01:52:54:00:e1:ea:4e}
	I0229 02:01:55.043145  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) DBG | domain kubernetes-upgrade-335938 has defined IP address 192.168.50.62 and MAC address 52:54:00:e1:ea:4e in network mk-kubernetes-upgrade-335938
	I0229 02:01:55.043399  340762 profile.go:148] Saving config to /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/kubernetes-upgrade-335938/config.json ...
	I0229 02:01:55.043608  340762 start.go:128] duration metric: createHost completed in 28.02384162s
	I0229 02:01:55.043637  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) Calling .GetSSHHostname
	I0229 02:01:55.046112  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) DBG | domain kubernetes-upgrade-335938 has defined MAC address 52:54:00:e1:ea:4e in network mk-kubernetes-upgrade-335938
	I0229 02:01:55.046456  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:ea:4e", ip: ""} in network mk-kubernetes-upgrade-335938: {Iface:virbr2 ExpiryTime:2024-02-29 03:01:43 +0000 UTC Type:0 Mac:52:54:00:e1:ea:4e Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:kubernetes-upgrade-335938 Clientid:01:52:54:00:e1:ea:4e}
	I0229 02:01:55.046476  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) DBG | domain kubernetes-upgrade-335938 has defined IP address 192.168.50.62 and MAC address 52:54:00:e1:ea:4e in network mk-kubernetes-upgrade-335938
	I0229 02:01:55.046588  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) Calling .GetSSHPort
	I0229 02:01:55.046769  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) Calling .GetSSHKeyPath
	I0229 02:01:55.046939  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) Calling .GetSSHKeyPath
	I0229 02:01:55.047063  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) Calling .GetSSHUsername
	I0229 02:01:55.047228  340762 main.go:141] libmachine: Using SSH client type: native
	I0229 02:01:55.047388  340762 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.50.62 22 <nil> <nil>}
	I0229 02:01:55.047398  340762 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0229 02:01:55.147104  340762 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709172115.136041922
	
	I0229 02:01:55.147125  340762 fix.go:206] guest clock: 1709172115.136041922
	I0229 02:01:55.147132  340762 fix.go:219] Guest: 2024-02-29 02:01:55.136041922 +0000 UTC Remote: 2024-02-29 02:01:55.043622636 +0000 UTC m=+40.504750902 (delta=92.419286ms)
	I0229 02:01:55.147153  340762 fix.go:190] guest clock delta is within tolerance: 92.419286ms
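
Note: the guest clock check above samples `date +%s.%N` over SSH and compares it with the host clock; the ~92ms delta is inside tolerance, so nothing is resynced. A sketch of that comparison, with `guestOut` standing in for the SSH output:

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

func main() {
	guestOut := "1709172115.136041922" // `date +%s.%N` from the guest, per the log
	secs, err := strconv.ParseFloat(strings.TrimSpace(guestOut), 64)
	if err != nil {
		panic(err)
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	fmt.Printf("guest clock delta: %s\n", delta) // resync only if this exceeds tolerance
}
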
	I0229 02:01:55.147158  340762 start.go:83] releasing machines lock for "kubernetes-upgrade-335938", held for 28.127630874s
	I0229 02:01:55.147180  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) Calling .DriverName
	I0229 02:01:55.147472  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) Calling .GetIP
	I0229 02:01:55.150231  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) DBG | domain kubernetes-upgrade-335938 has defined MAC address 52:54:00:e1:ea:4e in network mk-kubernetes-upgrade-335938
	I0229 02:01:55.150654  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:ea:4e", ip: ""} in network mk-kubernetes-upgrade-335938: {Iface:virbr2 ExpiryTime:2024-02-29 03:01:43 +0000 UTC Type:0 Mac:52:54:00:e1:ea:4e Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:kubernetes-upgrade-335938 Clientid:01:52:54:00:e1:ea:4e}
	I0229 02:01:55.150680  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) DBG | domain kubernetes-upgrade-335938 has defined IP address 192.168.50.62 and MAC address 52:54:00:e1:ea:4e in network mk-kubernetes-upgrade-335938
	I0229 02:01:55.150873  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) Calling .DriverName
	I0229 02:01:55.151405  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) Calling .DriverName
	I0229 02:01:55.151637  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) Calling .DriverName
	I0229 02:01:55.151767  340762 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0229 02:01:55.151814  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) Calling .GetSSHHostname
	I0229 02:01:55.151864  340762 ssh_runner.go:195] Run: cat /version.json
	I0229 02:01:55.151884  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) Calling .GetSSHHostname
	I0229 02:01:55.154447  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) DBG | domain kubernetes-upgrade-335938 has defined MAC address 52:54:00:e1:ea:4e in network mk-kubernetes-upgrade-335938
	I0229 02:01:55.154690  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) DBG | domain kubernetes-upgrade-335938 has defined MAC address 52:54:00:e1:ea:4e in network mk-kubernetes-upgrade-335938
	I0229 02:01:55.154840  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:ea:4e", ip: ""} in network mk-kubernetes-upgrade-335938: {Iface:virbr2 ExpiryTime:2024-02-29 03:01:43 +0000 UTC Type:0 Mac:52:54:00:e1:ea:4e Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:kubernetes-upgrade-335938 Clientid:01:52:54:00:e1:ea:4e}
	I0229 02:01:55.154866  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) DBG | domain kubernetes-upgrade-335938 has defined IP address 192.168.50.62 and MAC address 52:54:00:e1:ea:4e in network mk-kubernetes-upgrade-335938
	I0229 02:01:55.154992  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) Calling .GetSSHPort
	I0229 02:01:55.155143  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:ea:4e", ip: ""} in network mk-kubernetes-upgrade-335938: {Iface:virbr2 ExpiryTime:2024-02-29 03:01:43 +0000 UTC Type:0 Mac:52:54:00:e1:ea:4e Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:kubernetes-upgrade-335938 Clientid:01:52:54:00:e1:ea:4e}
	I0229 02:01:55.155182  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) DBG | domain kubernetes-upgrade-335938 has defined IP address 192.168.50.62 and MAC address 52:54:00:e1:ea:4e in network mk-kubernetes-upgrade-335938
	I0229 02:01:55.155199  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) Calling .GetSSHKeyPath
	I0229 02:01:55.155336  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) Calling .GetSSHPort
	I0229 02:01:55.155357  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) Calling .GetSSHUsername
	I0229 02:01:55.155544  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) Calling .GetSSHKeyPath
	I0229 02:01:55.155537  340762 sshutil.go:53] new ssh client: &{IP:192.168.50.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-309085/.minikube/machines/kubernetes-upgrade-335938/id_rsa Username:docker}
	I0229 02:01:55.155697  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) Calling .GetSSHUsername
	I0229 02:01:55.155876  340762 sshutil.go:53] new ssh client: &{IP:192.168.50.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-309085/.minikube/machines/kubernetes-upgrade-335938/id_rsa Username:docker}
	I0229 02:01:55.257766  340762 ssh_runner.go:195] Run: systemctl --version
	I0229 02:01:55.265443  340762 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0229 02:01:55.272439  340762 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0229 02:01:55.272499  340762 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0229 02:01:55.291639  340762 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0229 02:01:55.291659  340762 start.go:475] detecting cgroup driver to use...
	I0229 02:01:55.291710  340762 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0229 02:01:55.326797  340762 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0229 02:01:55.343252  340762 docker.go:217] disabling cri-docker service (if available) ...
	I0229 02:01:55.343311  340762 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0229 02:01:55.360181  340762 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0229 02:01:55.376796  340762 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0229 02:01:55.511028  340762 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0229 02:01:55.696529  340762 docker.go:233] disabling docker service ...
	I0229 02:01:55.696607  340762 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0229 02:01:55.715293  340762 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0229 02:01:55.729710  340762 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0229 02:01:55.857141  340762 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0229 02:01:55.984129  340762 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0229 02:01:56.000804  340762 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0229 02:01:56.023856  340762 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.1"|' /etc/containerd/config.toml"
	I0229 02:01:56.036051  340762 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0229 02:01:56.050015  340762 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0229 02:01:56.050095  340762 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0229 02:01:56.061552  340762 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0229 02:01:56.072270  340762 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0229 02:01:56.082887  340762 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0229 02:01:56.093741  340762 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0229 02:01:56.104812  340762 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
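
Note: the sed one-liners above rewrite /etc/containerd/config.toml in place: pause image and OOM settings first, then `SystemdCgroup = false` (the "cgroupfs" driver), the runc v2 shim, and the standard /etc/cni/net.d conf dir. The same SystemdCgroup edit expressed in Go, operating on an in-memory string for illustration:

package main

import (
	"fmt"
	"regexp"
)

func main() {
	cfg := "[plugins.\"io.containerd.grpc.v1.cri\".containerd.runtimes.runc.options]\n  SystemdCgroup = true\n"
	// Equivalent of: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
	fmt.Print(re.ReplaceAllString(cfg, "${1}SystemdCgroup = false"))
}
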
	I0229 02:01:56.115505  340762 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0229 02:01:56.125040  340762 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0229 02:01:56.125105  340762 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0229 02:01:56.139504  340762 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0229 02:01:56.150203  340762 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 02:01:56.317843  340762 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0229 02:01:56.353954  340762 start.go:522] Will wait 60s for socket path /run/containerd/containerd.sock
	I0229 02:01:56.354030  340762 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0229 02:01:56.361427  340762 retry.go:31] will retry after 857.648533ms: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
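
Note: after `systemctl restart containerd` the socket file takes a moment to reappear, hence the one failed stat and the retry above. A sketch of the "Will wait 60s for socket path" loop; `waitForSocket` is an illustrative name:

package main

import (
	"fmt"
	"os"
	"time"
)

func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil // socket exists; containerd is accepting connections shortly after
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("%s did not appear within %s", path, timeout)
}

func main() {
	if err := waitForSocket("/run/containerd/containerd.sock", 60*time.Second); err != nil {
		panic(err)
	}
	fmt.Println("containerd socket is up")
}
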
	I0229 02:01:57.219735  340762 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0229 02:01:57.226011  340762 start.go:543] Will wait 60s for crictl version
	I0229 02:01:57.226087  340762 ssh_runner.go:195] Run: which crictl
	I0229 02:01:57.230521  340762 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0229 02:01:57.273705  340762 start.go:559] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.7.11
	RuntimeApiVersion:  v1
	I0229 02:01:57.273782  340762 ssh_runner.go:195] Run: containerd --version
	I0229 02:01:57.304456  340762 ssh_runner.go:195] Run: containerd --version
	I0229 02:01:57.340438  340762 out.go:177] * Preparing Kubernetes v1.16.0 on containerd 1.7.11 ...
	I0229 02:01:57.341808  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) Calling .GetIP
	I0229 02:01:57.344806  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) DBG | domain kubernetes-upgrade-335938 has defined MAC address 52:54:00:e1:ea:4e in network mk-kubernetes-upgrade-335938
	I0229 02:01:57.345200  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:ea:4e", ip: ""} in network mk-kubernetes-upgrade-335938: {Iface:virbr2 ExpiryTime:2024-02-29 03:01:43 +0000 UTC Type:0 Mac:52:54:00:e1:ea:4e Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:kubernetes-upgrade-335938 Clientid:01:52:54:00:e1:ea:4e}
	I0229 02:01:57.345222  340762 main.go:141] libmachine: (kubernetes-upgrade-335938) DBG | domain kubernetes-upgrade-335938 has defined IP address 192.168.50.62 and MAC address 52:54:00:e1:ea:4e in network mk-kubernetes-upgrade-335938
	I0229 02:01:57.345496  340762 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0229 02:01:57.350306  340762 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
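
Note: the bash one-liner above updates /etc/hosts idempotently: filter out any stale host.minikube.internal entry, append the current gateway IP, then copy the temp file back with sudo. A sketch that builds the same command for an arbitrary gateway; `hostsCmd` is illustrative:

package main

import "fmt"

// hostsCmd reproduces the /etc/hosts rewrite from the log for a given
// gateway IP. The $'\t...' is bash ANSI-C quoting for a literal tab; the
// \t inside the echo string becomes a real tab via Go string escaping.
func hostsCmd(gatewayIP string) string {
	return fmt.Sprintf("{ grep -v $'\\thost.minikube.internal$' \"/etc/hosts\"; echo \"%s\thost.minikube.internal\"; } > /tmp/h.$$; sudo cp /tmp/h.$$ \"/etc/hosts\"", gatewayIP)
}

func main() {
	fmt.Println(hostsCmd("192.168.50.1"))
}
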
	I0229 02:01:57.364558  340762 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I0229 02:01:57.364614  340762 ssh_runner.go:195] Run: sudo crictl images --output json
	I0229 02:01:57.400960  340762 containerd.go:608] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0229 02:01:57.401041  340762 ssh_runner.go:195] Run: which lz4
	I0229 02:01:57.405654  340762 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0229 02:01:57.411040  340762 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0229 02:01:57.411070  340762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-309085/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (440628646 bytes)
	I0229 02:01:59.319892  340762 containerd.go:548] Took 1.914272 seconds to copy over tarball
	I0229 02:01:59.319975  340762 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0229 02:02:01.898606  340762 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.57859299s)
	I0229 02:02:01.898647  340762 containerd.go:555] Took 2.578722 seconds to extract the tarball
	I0229 02:02:01.898661  340762 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0229 02:02:01.941693  340762 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 02:02:02.063092  340762 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0229 02:02:02.101574  340762 ssh_runner.go:195] Run: sudo crictl images --output json
	I0229 02:02:02.154897  340762 retry.go:31] will retry after 355.288593ms: sudo crictl images --output json: Process exited with status 1
	stdout:
	
	stderr:
	time="2024-02-29T02:02:02Z" level=fatal msg="validate service connection: validate CRI v1 image API for endpoint \"unix:///run/containerd/containerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
	I0229 02:02:02.510414  340762 ssh_runner.go:195] Run: sudo crictl images --output json
	I0229 02:02:02.548668  340762 containerd.go:608] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0229 02:02:02.548696  340762 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0229 02:02:02.548788  340762 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 02:02:02.548816  340762 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I0229 02:02:02.548845  340762 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0229 02:02:02.548846  340762 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I0229 02:02:02.548815  340762 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I0229 02:02:02.549016  340762 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I0229 02:02:02.548824  340762 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I0229 02:02:02.549214  340762 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0229 02:02:02.550494  340762 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I0229 02:02:02.550509  340762 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I0229 02:02:02.550563  340762 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I0229 02:02:02.550591  340762 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I0229 02:02:02.550570  340762 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I0229 02:02:02.550504  340762 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0229 02:02:02.550576  340762 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I0229 02:02:02.550495  340762 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 02:02:02.686158  340762 containerd.go:252] Checking existence of image with name "registry.k8s.io/pause:3.1" and sha "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e"
	I0229 02:02:02.686221  340762 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images check
	I0229 02:02:02.715292  340762 containerd.go:252] Checking existence of image with name "registry.k8s.io/kube-controller-manager:v1.16.0" and sha "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d"
	I0229 02:02:02.715391  340762 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images check
	I0229 02:02:02.768064  340762 containerd.go:252] Checking existence of image with name "registry.k8s.io/kube-apiserver:v1.16.0" and sha "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e"
	I0229 02:02:02.768145  340762 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images check
	I0229 02:02:02.773631  340762 containerd.go:252] Checking existence of image with name "registry.k8s.io/etcd:3.3.15-0" and sha "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed"
	I0229 02:02:02.773707  340762 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images check
	I0229 02:02:02.867977  340762 containerd.go:252] Checking existence of image with name "registry.k8s.io/kube-scheduler:v1.16.0" and sha "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a"
	I0229 02:02:02.868053  340762 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images check
	I0229 02:02:02.871394  340762 containerd.go:252] Checking existence of image with name "registry.k8s.io/coredns:1.6.2" and sha "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b"
	I0229 02:02:02.871473  340762 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images check
	I0229 02:02:02.882297  340762 containerd.go:252] Checking existence of image with name "registry.k8s.io/kube-proxy:v1.16.0" and sha "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384"
	I0229 02:02:02.882379  340762 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images check
	I0229 02:02:03.080904  340762 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I0229 02:02:03.080961  340762 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I0229 02:02:03.081015  340762 ssh_runner.go:195] Run: which crictl
	I0229 02:02:03.223287  340762 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I0229 02:02:03.223342  340762 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0229 02:02:03.223421  340762 ssh_runner.go:195] Run: which crictl
	I0229 02:02:03.536837  340762 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I0229 02:02:03.536917  340762 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I0229 02:02:03.536991  340762 ssh_runner.go:195] Run: which crictl
	I0229 02:02:03.609393  340762 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I0229 02:02:03.609453  340762 cri.go:218] Removing image: registry.k8s.io/etcd:3.3.15-0
	I0229 02:02:03.609505  340762 ssh_runner.go:195] Run: which crictl
	I0229 02:02:03.880122  340762 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images check: (1.012038227s)
	I0229 02:02:03.880210  340762 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I0229 02:02:03.880249  340762 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I0229 02:02:03.880324  340762 ssh_runner.go:195] Run: which crictl
	I0229 02:02:03.880574  340762 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images check: (1.00907489s)
	I0229 02:02:03.880646  340762 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I0229 02:02:03.880693  340762 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.2
	I0229 02:02:03.880735  340762 ssh_runner.go:195] Run: which crictl
	I0229 02:02:03.886610  340762 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0
	I0229 02:02:03.886704  340762 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I0229 02:02:03.886781  340762 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I0229 02:02:03.886831  340762 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.16.0
	I0229 02:02:03.886890  340762 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0
	I0229 02:02:03.887430  340762 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images check: (1.005031894s)
	I0229 02:02:03.887492  340762 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I0229 02:02:03.887529  340762 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I0229 02:02:03.887560  340762 ssh_runner.go:195] Run: which crictl
	I0229 02:02:03.887654  340762 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.2
	I0229 02:02:03.919984  340762 containerd.go:252] Checking existence of image with name "gcr.io/k8s-minikube/storage-provisioner:v5" and sha "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"
	I0229 02:02:03.920125  340762 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images check
	I0229 02:02:03.973614  340762 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18063-309085/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I0229 02:02:04.127583  340762 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18063-309085/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I0229 02:02:04.127675  340762 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18063-309085/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I0229 02:02:04.127764  340762 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18063-309085/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I0229 02:02:04.127838  340762 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18063-309085/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I0229 02:02:04.127889  340762 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0
	I0229 02:02:04.128107  340762 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18063-309085/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I0229 02:02:04.288737  340762 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18063-309085/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I0229 02:02:04.288811  340762 cache_images.go:92] LoadImages completed in 1.740102786s
	W0229 02:02:04.288883  340762 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18063-309085/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0: no such file or directory
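
Note: for this old v1.16.0 image set, each ref is checked in containerd's k8s.io namespace; a ref whose digest does not match is removed with crictl and would then be reloaded from minikube's on-disk cache, and the warning above is the (non-fatal) result of those cache tarballs being absent on this host. An illustrative sketch of that check / remove / reload decision — the digest handling is simplified to a substring match and the `ctr images import` step is an assumption about the reload mechanism:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// ensureImage sketches the LoadImages decision above: if the ref is not
// already present in containerd's k8s.io namespace, drop any stale copy
// and load the tarball cached on the host (if it exists).
func ensureImage(ref, cachePath string) error {
	out, _ := exec.Command("sudo", "ctr", "-n=k8s.io", "images", "check").Output()
	if strings.Contains(string(out), ref) {
		return nil // present with the expected digest (simplified check)
	}
	_ = exec.Command("sudo", "/usr/bin/crictl", "rmi", ref).Run() // ignore "not found"
	if _, err := os.Stat(cachePath); err != nil {
		return fmt.Errorf("loading cached images: %w", err) // the failure mode in the log
	}
	return exec.Command("sudo", "ctr", "-n=k8s.io", "images", "import", cachePath).Run()
}

func main() {
	if err := ensureImage("registry.k8s.io/pause:3.1",
		"/home/jenkins/minikube-integration/18063-309085/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1"); err != nil {
		fmt.Println("X Unable to load cached images:", err)
	}
}
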
	I0229 02:02:04.288945  340762 ssh_runner.go:195] Run: sudo crictl info
	I0229 02:02:04.329174  340762 cni.go:84] Creating CNI manager for ""
	I0229 02:02:04.329199  340762 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0229 02:02:04.329219  340762 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0229 02:02:04.329242  340762 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.62 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-335938 NodeName:kubernetes-upgrade-335938 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.62"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.62 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0229 02:02:04.329395  340762 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.62
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "kubernetes-upgrade-335938"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.62
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.62"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: kubernetes-upgrade-335938
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.50.62:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
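	The four documents rendered above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) use the kubeadm.k8s.io/v1beta1 API group, which is the version kubeadm v1.16 expects. As an illustrative sanity check on the node — not part of the captured log, and assuming kubeadm v1.16's phase layout accepts --config on the phase subcommand — the rendered file can be exercised before a full init:

	  # run only the preflight phase against the generated config
	  sudo /var/lib/minikube/binaries/v1.16.0/kubeadm init phase preflight \
	      --config /var/tmp/minikube/kubeadm.yaml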
	
	I0229 02:02:04.329521  340762 kubeadm.go:976] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=kubernetes-upgrade-335938 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.62
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-335938 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
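	The empty ExecStart= line in the drop-in above is deliberate: systemd requires clearing an inherited ExecStart before a unit may redefine it, so the drop-in first blanks the packaged command and then supplies the minikube-specific kubelet invocation. An illustrative way to confirm the drop-in took effect on the node (standard systemd commands, not part of the captured log):

	  systemctl cat kubelet      # shows the base unit plus every drop-in, including 10-kubeadm.conf
	  systemctl daemon-reload    # re-read unit files after any edit to the drop-in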
	I0229 02:02:04.329605  340762 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0229 02:02:04.343100  340762 binaries.go:44] Found k8s binaries, skipping transfer
	I0229 02:02:04.343198  340762 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0229 02:02:04.355500  340762 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (446 bytes)
	I0229 02:02:04.375161  340762 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0229 02:02:04.397949  340762 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2191 bytes)
	I0229 02:02:04.420697  340762 ssh_runner.go:195] Run: grep 192.168.50.62	control-plane.minikube.internal$ /etc/hosts
	I0229 02:02:04.426312  340762 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.62	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0229 02:02:04.444859  340762 certs.go:56] Setting up /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/kubernetes-upgrade-335938 for IP: 192.168.50.62
	I0229 02:02:04.444900  340762 certs.go:190] acquiring lock for shared ca certs: {Name:mkd93205d1e0ff28501dacf7d21e224f19de9501 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:02:04.445064  340762 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18063-309085/.minikube/ca.key
	I0229 02:02:04.445128  340762 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18063-309085/.minikube/proxy-client-ca.key
	I0229 02:02:04.445190  340762 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/kubernetes-upgrade-335938/client.key
	I0229 02:02:04.445214  340762 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/kubernetes-upgrade-335938/client.crt with IP's: []
	I0229 02:02:04.528833  340762 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/kubernetes-upgrade-335938/client.crt ...
	I0229 02:02:04.528870  340762 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/kubernetes-upgrade-335938/client.crt: {Name:mk86236f6776cefb416732f27091a8a4296dd232 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:02:04.529069  340762 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/kubernetes-upgrade-335938/client.key ...
	I0229 02:02:04.529088  340762 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/kubernetes-upgrade-335938/client.key: {Name:mka37c77e4f1be5ce89c62cf2013bbdec1b8a54e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:02:04.529205  340762 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/kubernetes-upgrade-335938/apiserver.key.568d448d
	I0229 02:02:04.529228  340762 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/kubernetes-upgrade-335938/apiserver.crt.568d448d with IP's: [192.168.50.62 10.96.0.1 127.0.0.1 10.0.0.1]
	I0229 02:02:04.954599  340762 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/kubernetes-upgrade-335938/apiserver.crt.568d448d ...
	I0229 02:02:04.954634  340762 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/kubernetes-upgrade-335938/apiserver.crt.568d448d: {Name:mk44cae42ce249899543dd664f59a070940f7fda Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:02:04.954836  340762 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/kubernetes-upgrade-335938/apiserver.key.568d448d ...
	I0229 02:02:04.954856  340762 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/kubernetes-upgrade-335938/apiserver.key.568d448d: {Name:mkc99d455ac892f3a5896fa198037bb05c982691 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:02:04.954951  340762 certs.go:337] copying /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/kubernetes-upgrade-335938/apiserver.crt.568d448d -> /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/kubernetes-upgrade-335938/apiserver.crt
	I0229 02:02:04.955050  340762 certs.go:341] copying /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/kubernetes-upgrade-335938/apiserver.key.568d448d -> /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/kubernetes-upgrade-335938/apiserver.key
	I0229 02:02:04.955109  340762 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/kubernetes-upgrade-335938/proxy-client.key
	I0229 02:02:04.955124  340762 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/kubernetes-upgrade-335938/proxy-client.crt with IP's: []
	I0229 02:02:05.088882  340762 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/kubernetes-upgrade-335938/proxy-client.crt ...
	I0229 02:02:05.088919  340762 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/kubernetes-upgrade-335938/proxy-client.crt: {Name:mk7f485035dfd20acaa5ac85ade2f97f40f3a66a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:02:05.089121  340762 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/kubernetes-upgrade-335938/proxy-client.key ...
	I0229 02:02:05.089137  340762 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/kubernetes-upgrade-335938/proxy-client.key: {Name:mk1c384d81f237152bcdde7fac04bebd48cad93e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:02:05.089332  340762 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-309085/.minikube/certs/home/jenkins/minikube-integration/18063-309085/.minikube/certs/316336.pem (1338 bytes)
	W0229 02:02:05.089371  340762 certs.go:433] ignoring /home/jenkins/minikube-integration/18063-309085/.minikube/certs/home/jenkins/minikube-integration/18063-309085/.minikube/certs/316336_empty.pem, impossibly tiny 0 bytes
	I0229 02:02:05.089383  340762 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-309085/.minikube/certs/home/jenkins/minikube-integration/18063-309085/.minikube/certs/ca-key.pem (1679 bytes)
	I0229 02:02:05.089406  340762 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-309085/.minikube/certs/home/jenkins/minikube-integration/18063-309085/.minikube/certs/ca.pem (1082 bytes)
	I0229 02:02:05.089430  340762 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-309085/.minikube/certs/home/jenkins/minikube-integration/18063-309085/.minikube/certs/cert.pem (1123 bytes)
	I0229 02:02:05.089452  340762 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-309085/.minikube/certs/home/jenkins/minikube-integration/18063-309085/.minikube/certs/key.pem (1675 bytes)
	I0229 02:02:05.089488  340762 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-309085/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18063-309085/.minikube/files/etc/ssl/certs/3163362.pem (1708 bytes)
	I0229 02:02:05.090147  340762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/kubernetes-upgrade-335938/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0229 02:02:05.122994  340762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/kubernetes-upgrade-335938/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0229 02:02:05.157072  340762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/kubernetes-upgrade-335938/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0229 02:02:05.186539  340762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/kubernetes-upgrade-335938/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0229 02:02:05.215906  340762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-309085/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0229 02:02:05.251078  340762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-309085/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0229 02:02:05.279394  340762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-309085/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0229 02:02:05.307645  340762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-309085/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0229 02:02:05.336169  340762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-309085/.minikube/certs/316336.pem --> /usr/share/ca-certificates/316336.pem (1338 bytes)
	I0229 02:02:05.364117  340762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-309085/.minikube/files/etc/ssl/certs/3163362.pem --> /usr/share/ca-certificates/3163362.pem (1708 bytes)
	I0229 02:02:05.390876  340762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-309085/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0229 02:02:05.417550  340762 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0229 02:02:05.442823  340762 ssh_runner.go:195] Run: openssl version
	I0229 02:02:05.450852  340762 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0229 02:02:05.466583  340762 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0229 02:02:05.472045  340762 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 29 01:12 /usr/share/ca-certificates/minikubeCA.pem
	I0229 02:02:05.472137  340762 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0229 02:02:05.479039  340762 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0229 02:02:05.493890  340762 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/316336.pem && ln -fs /usr/share/ca-certificates/316336.pem /etc/ssl/certs/316336.pem"
	I0229 02:02:05.507637  340762 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/316336.pem
	I0229 02:02:05.513256  340762 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 29 01:18 /usr/share/ca-certificates/316336.pem
	I0229 02:02:05.513333  340762 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/316336.pem
	I0229 02:02:05.519832  340762 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/316336.pem /etc/ssl/certs/51391683.0"
	I0229 02:02:05.534396  340762 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3163362.pem && ln -fs /usr/share/ca-certificates/3163362.pem /etc/ssl/certs/3163362.pem"
	I0229 02:02:05.548816  340762 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3163362.pem
	I0229 02:02:05.554014  340762 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 29 01:18 /usr/share/ca-certificates/3163362.pem
	I0229 02:02:05.554092  340762 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3163362.pem
	I0229 02:02:05.560738  340762 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3163362.pem /etc/ssl/certs/3ec20f2e.0"
	I0229 02:02:05.573426  340762 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0229 02:02:05.578024  340762 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0229 02:02:05.578116  340762 kubeadm.go:404] StartCluster: {Name:kubernetes-upgrade-335938 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-335938 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.62 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 02:02:05.578223  340762 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0229 02:02:05.578284  340762 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0229 02:02:05.624108  340762 cri.go:89] found id: ""
	I0229 02:02:05.624205  340762 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0229 02:02:05.635871  340762 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0229 02:02:05.647641  340762 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 02:02:05.658870  340762 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 02:02:05.658921  340762 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0229 02:02:05.782474  340762 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0229 02:02:05.782549  340762 kubeadm.go:322] [preflight] Running pre-flight checks
	I0229 02:02:06.042549  340762 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0229 02:02:06.042708  340762 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0229 02:02:06.042851  340762 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0229 02:02:06.280489  340762 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0229 02:02:06.282865  340762 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0229 02:02:06.293294  340762 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0229 02:02:06.438309  340762 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0229 02:02:06.440116  340762 out.go:204]   - Generating certificates and keys ...
	I0229 02:02:06.440230  340762 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0229 02:02:06.440336  340762 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0229 02:02:06.653881  340762 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0229 02:02:06.839266  340762 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0229 02:02:07.123339  340762 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0229 02:02:07.356050  340762 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0229 02:02:07.439245  340762 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0229 02:02:07.439641  340762 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-335938 localhost] and IPs [192.168.50.62 127.0.0.1 ::1]
	I0229 02:02:07.520551  340762 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0229 02:02:07.521020  340762 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-335938 localhost] and IPs [192.168.50.62 127.0.0.1 ::1]
	I0229 02:02:07.828650  340762 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0229 02:02:08.020965  340762 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0229 02:02:08.364508  340762 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0229 02:02:08.364850  340762 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0229 02:02:08.508276  340762 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0229 02:02:08.934225  340762 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0229 02:02:09.462770  340762 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0229 02:02:09.607228  340762 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0229 02:02:09.608373  340762 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0229 02:02:09.609760  340762 out.go:204]   - Booting up control plane ...
	I0229 02:02:09.609902  340762 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0229 02:02:09.615077  340762 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0229 02:02:09.616514  340762 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0229 02:02:09.617265  340762 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0229 02:02:09.620933  340762 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0229 02:02:49.621192  340762 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0229 02:02:49.623763  340762 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 02:02:49.623994  340762 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 02:02:54.624876  340762 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 02:02:54.625061  340762 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 02:03:04.626007  340762 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 02:03:04.626324  340762 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 02:03:24.628068  340762 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 02:03:24.637652  340762 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 02:04:04.627694  340762 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 02:04:04.627976  340762 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 02:04:04.628015  340762 kubeadm.go:322] 
	I0229 02:04:04.628081  340762 kubeadm.go:322] Unfortunately, an error has occurred:
	I0229 02:04:04.628143  340762 kubeadm.go:322] 	timed out waiting for the condition
	I0229 02:04:04.628167  340762 kubeadm.go:322] 
	I0229 02:04:04.628227  340762 kubeadm.go:322] This error is likely caused by:
	I0229 02:04:04.628288  340762 kubeadm.go:322] 	- The kubelet is not running
	I0229 02:04:04.628429  340762 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0229 02:04:04.628441  340762 kubeadm.go:322] 
	I0229 02:04:04.628625  340762 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0229 02:04:04.628698  340762 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0229 02:04:04.628734  340762 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0229 02:04:04.628740  340762 kubeadm.go:322] 
	I0229 02:04:04.628876  340762 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0229 02:04:04.628994  340762 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0229 02:04:04.629098  340762 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0229 02:04:04.629167  340762 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0229 02:04:04.629264  340762 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0229 02:04:04.629292  340762 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0229 02:04:04.630201  340762 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0229 02:04:04.630322  340762 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0229 02:04:04.630408  340762 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
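	Port 10248 in the kubelet-check lines above is the kubelet's local healthz endpoint; "connection refused" for the entire four-minute window means the kubelet process never bound the port at all, rather than answering unhealthy. An illustrative triage sequence on the node, following the commands kubeadm itself suggests (not part of the captured log):

	  systemctl status kubelet --no-pager            # is the service active at all?
	  journalctl -u kubelet --no-pager | tail -n 50  # the last lines usually show the crash reason
	  curl -sSL http://localhost:10248/healthz       # prints 'ok' once the kubelet is up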
	W0229 02:04:04.630545  340762 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-335938 localhost] and IPs [192.168.50.62 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-335938 localhost] and IPs [192.168.50.62 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0229 02:04:04.630609  340762 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0229 02:04:05.176806  340762 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 02:04:05.197913  340762 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 02:04:05.212251  340762 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 02:04:05.212307  340762 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0229 02:04:05.299751  340762 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0229 02:04:05.300010  340762 kubeadm.go:322] [preflight] Running pre-flight checks
	I0229 02:04:05.477356  340762 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0229 02:04:05.477485  340762 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0229 02:04:05.477618  340762 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0229 02:04:05.791755  340762 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0229 02:04:05.793258  340762 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0229 02:04:05.805521  340762 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0229 02:04:05.963338  340762 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0229 02:04:05.966801  340762 out.go:204]   - Generating certificates and keys ...
	I0229 02:04:05.966921  340762 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0229 02:04:05.967007  340762 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0229 02:04:05.967101  340762 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0229 02:04:05.967217  340762 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0229 02:04:05.967317  340762 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0229 02:04:05.967392  340762 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0229 02:04:05.967492  340762 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0229 02:04:05.967745  340762 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0229 02:04:05.969717  340762 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0229 02:04:05.970902  340762 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0229 02:04:05.971114  340762 kubeadm.go:322] [certs] Using the existing "sa" key
	I0229 02:04:05.971213  340762 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0229 02:04:06.099409  340762 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0229 02:04:06.786177  340762 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0229 02:04:06.977631  340762 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0229 02:04:07.142807  340762 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0229 02:04:07.144511  340762 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0229 02:04:07.146174  340762 out.go:204]   - Booting up control plane ...
	I0229 02:04:07.146304  340762 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0229 02:04:07.152266  340762 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0229 02:04:07.153437  340762 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0229 02:04:07.154357  340762 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0229 02:04:07.161135  340762 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0229 02:04:47.163078  340762 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0229 02:04:47.163553  340762 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 02:04:47.163774  340762 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 02:04:52.164999  340762 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 02:04:52.165274  340762 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 02:05:02.166021  340762 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 02:05:02.166265  340762 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 02:05:22.167775  340762 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 02:05:22.168026  340762 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 02:06:02.168511  340762 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 02:06:02.168796  340762 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 02:06:02.168829  340762 kubeadm.go:322] 
	I0229 02:06:02.168907  340762 kubeadm.go:322] Unfortunately, an error has occurred:
	I0229 02:06:02.169074  340762 kubeadm.go:322] 	timed out waiting for the condition
	I0229 02:06:02.169088  340762 kubeadm.go:322] 
	I0229 02:06:02.169118  340762 kubeadm.go:322] This error is likely caused by:
	I0229 02:06:02.169180  340762 kubeadm.go:322] 	- The kubelet is not running
	I0229 02:06:02.169339  340762 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0229 02:06:02.169354  340762 kubeadm.go:322] 
	I0229 02:06:02.169530  340762 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0229 02:06:02.169600  340762 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0229 02:06:02.169649  340762 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0229 02:06:02.169658  340762 kubeadm.go:322] 
	I0229 02:06:02.169735  340762 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0229 02:06:02.169863  340762 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0229 02:06:02.170016  340762 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0229 02:06:02.170098  340762 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0229 02:06:02.170185  340762 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0229 02:06:02.170218  340762 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0229 02:06:02.172083  340762 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0229 02:06:02.172205  340762 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0229 02:06:02.172301  340762 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0229 02:06:02.172395  340762 kubeadm.go:406] StartCluster complete in 3m56.594292055s
	I0229 02:06:02.172464  340762 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:06:02.172532  340762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:06:02.259396  340762 cri.go:89] found id: ""
	I0229 02:06:02.259431  340762 logs.go:276] 0 containers: []
	W0229 02:06:02.259444  340762 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:06:02.259452  340762 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:06:02.259580  340762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:06:02.309734  340762 cri.go:89] found id: ""
	I0229 02:06:02.309770  340762 logs.go:276] 0 containers: []
	W0229 02:06:02.309784  340762 logs.go:278] No container was found matching "etcd"
	I0229 02:06:02.309792  340762 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:06:02.309857  340762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:06:02.348233  340762 cri.go:89] found id: ""
	I0229 02:06:02.348259  340762 logs.go:276] 0 containers: []
	W0229 02:06:02.348284  340762 logs.go:278] No container was found matching "coredns"
	I0229 02:06:02.348290  340762 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:06:02.348346  340762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:06:02.388930  340762 cri.go:89] found id: ""
	I0229 02:06:02.388955  340762 logs.go:276] 0 containers: []
	W0229 02:06:02.388963  340762 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:06:02.388970  340762 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:06:02.389021  340762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:06:02.426823  340762 cri.go:89] found id: ""
	I0229 02:06:02.426851  340762 logs.go:276] 0 containers: []
	W0229 02:06:02.426859  340762 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:06:02.426865  340762 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:06:02.426930  340762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:06:02.462693  340762 cri.go:89] found id: ""
	I0229 02:06:02.462729  340762 logs.go:276] 0 containers: []
	W0229 02:06:02.462741  340762 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:06:02.462749  340762 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:06:02.462825  340762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:06:02.511120  340762 cri.go:89] found id: ""
	I0229 02:06:02.511151  340762 logs.go:276] 0 containers: []
	W0229 02:06:02.511163  340762 logs.go:278] No container was found matching "kindnet"
	I0229 02:06:02.511175  340762 logs.go:123] Gathering logs for kubelet ...
	I0229 02:06:02.511187  340762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:06:02.560521  340762 logs.go:123] Gathering logs for dmesg ...
	I0229 02:06:02.560555  340762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:06:02.577461  340762 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:06:02.577502  340762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:06:02.717046  340762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
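	The refusal on localhost:8443 is consistent with the crictl listings above: no kube-apiserver container exists, so nothing is serving the endpoint named in /var/lib/minikube/kubeconfig. An illustrative confirmation from the node (standard iproute2 tooling, not part of the captured log):

	  sudo ss -ltnp | grep 8443 || echo "no listener on 8443"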
	I0229 02:06:02.717076  340762 logs.go:123] Gathering logs for containerd ...
	I0229 02:06:02.717094  340762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:06:02.756129  340762 logs.go:123] Gathering logs for container status ...
	I0229 02:06:02.756162  340762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
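	The `which crictl || echo crictl` fallback above covers hosts where crictl is not on root's PATH. On this containerd runtime, the docker-oriented advice in the kubeadm error text translates to crictl against the containerd socket; an illustrative equivalent (not part of the captured log, with CONTAINERID left as a placeholder exactly as kubeadm prints it):

	  sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a
	  sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID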
	W0229 02:06:02.804548  340762 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0229 02:06:02.804615  340762 out.go:239] * 
	W0229 02:06:02.804697  340762 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0229 02:06:02.804730  340762 out.go:239] * 
	W0229 02:06:02.805647  340762 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0229 02:06:02.808504  340762 out.go:177] 
	W0229 02:06:02.809654  340762 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0229 02:06:02.809736  340762 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0229 02:06:02.809759  340762 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0229 02:06:02.811241  340762 out.go:177] 

** /stderr **
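The repeated [kubelet-check] lines in the stderr above are kubeadm polling the kubelet's health endpoint on localhost:10248 until it answers or the 4m0s wait-control-plane budget runs out. As a minimal sketch (not kubeadm's actual code; the URL and timings are taken from the log itself), the probe loop amounts to:

package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 5 * time.Second}
	// kubeadm's wait-control-plane phase allows up to 4m0s.
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("http://localhost:10248/healthz")
		if err != nil {
			// The failure mode in the log: nothing is listening on
			// 10248, so every dial is refused.
			fmt.Println("kubelet not healthy yet:", err)
			time.Sleep(10 * time.Second)
			continue
		}
		resp.Body.Close()
		if resp.StatusCode == http.StatusOK {
			fmt.Println("kubelet is healthy")
			return
		}
		time.Sleep(10 * time.Second)
	}
	fmt.Println("timed out waiting for the condition")
}

In this run every probe fails with "connection refused" because the kubelet never came up, which is why kubeadm aborts the wait-control-plane phase and the start exits with status 109.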
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-linux-amd64 start -p kubernetes-upgrade-335938 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: exit status 109
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-335938
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-335938: (1.307662394s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-335938 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-335938 status --format={{.Host}}: exit status 7 (78.192265ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
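Exit status 7 from `minikube status` is expected right after `minikube stop`, which is why the test records it as "(may be ok)". A minimal sketch of how a caller can recover that numeric code in Go; the bitmask reading in the comment is an inference from minikube's documented status flags, not something this report states:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "-p",
		"kubernetes-upgrade-335938", "status", "--format={{.Host}}")
	out, err := cmd.Output()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// minikube status appears to encode component states in the
		// exit code; 7 (= 1|2|4) is consistent with host, cluster, and
		// Kubernetes all reported stopped, as the "Stopped" stdout shows.
		fmt.Printf("status %q: exit code %d (may be ok)\n", out, exitErr.ExitCode())
		return
	}
	if err != nil {
		fmt.Println("could not run minikube:", err)
		return
	}
	fmt.Printf("status %q: host running\n", out)
}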
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-335938 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-335938 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (50.6090218s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-335938 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-335938 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-335938 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2  --container-runtime=containerd: exit status 106 (94.608403ms)

-- stdout --
	* [kubernetes-upgrade-335938] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18063
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18063-309085/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18063-309085/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.29.0-rc.2 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-335938
	    minikube start -p kubernetes-upgrade-335938 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-3359382 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.29.0-rc.2, by running:
	    
	    minikube start -p kubernetes-upgrade-335938 --kubernetes-version=v1.29.0-rc.2
	    

** /stderr **
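The K8S_DOWNGRADE_UNSUPPORTED exit above (status 106) is minikube refusing an in-place downgrade of the existing profile from v1.29.0-rc.2 to v1.16.0. A minimal sketch of the version guard this implies, using golang.org/x/mod/semver for the comparison; the checkDowngrade helper is hypothetical, not minikube's actual implementation:

package main

import (
	"fmt"

	"golang.org/x/mod/semver"
)

// checkDowngrade is a hypothetical helper: it rejects any request that
// would move an existing cluster to an older Kubernetes version.
func checkDowngrade(existing, requested string) error {
	if !semver.IsValid(existing) || !semver.IsValid(requested) {
		return fmt.Errorf("invalid version: %s / %s", existing, requested)
	}
	if semver.Compare(requested, existing) < 0 {
		return fmt.Errorf("unable to safely downgrade existing Kubernetes %s cluster to %s",
			existing, requested)
	}
	return nil
}

func main() {
	if err := checkDowngrade("v1.29.0-rc.2", "v1.16.0"); err != nil {
		fmt.Println("X Exiting due to K8S_DOWNGRADE_UNSUPPORTED:", err)
	}
}

The safe paths the suggestion lists (delete and recreate, start a second profile, or keep the newer version) all avoid mutating an existing cluster's state with older binaries.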
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-335938 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-335938 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (42.030950703s)
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2024-02-29 02:07:37.046718742 +0000 UTC m=+3429.213744432
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-335938 -n kubernetes-upgrade-335938
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-335938 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p kubernetes-upgrade-335938 logs -n 25: (2.460279862s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|----------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |    Profile     |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|----------------|---------|---------|---------------------|---------------------|
	| ssh     | -p kindnet-704272 sudo                               | kindnet-704272 | jenkins | v1.32.0 | 29 Feb 24 02:07 UTC | 29 Feb 24 02:07 UTC |
	|         | systemctl status kubelet --all                       |                |         |         |                     |                     |
	|         | --full --no-pager                                    |                |         |         |                     |                     |
	| ssh     | -p kindnet-704272 sudo                               | kindnet-704272 | jenkins | v1.32.0 | 29 Feb 24 02:07 UTC | 29 Feb 24 02:07 UTC |
	|         | systemctl cat kubelet                                |                |         |         |                     |                     |
	|         | --no-pager                                           |                |         |         |                     |                     |
	| ssh     | -p kindnet-704272 sudo                               | kindnet-704272 | jenkins | v1.32.0 | 29 Feb 24 02:07 UTC | 29 Feb 24 02:07 UTC |
	|         | journalctl -xeu kubelet --all                        |                |         |         |                     |                     |
	|         | --full --no-pager                                    |                |         |         |                     |                     |
	| ssh     | -p kindnet-704272 sudo cat                           | kindnet-704272 | jenkins | v1.32.0 | 29 Feb 24 02:07 UTC | 29 Feb 24 02:07 UTC |
	|         | /etc/kubernetes/kubelet.conf                         |                |         |         |                     |                     |
	| ssh     | -p kindnet-704272 sudo cat                           | kindnet-704272 | jenkins | v1.32.0 | 29 Feb 24 02:07 UTC | 29 Feb 24 02:07 UTC |
	|         | /var/lib/kubelet/config.yaml                         |                |         |         |                     |                     |
	| ssh     | -p kindnet-704272 sudo                               | kindnet-704272 | jenkins | v1.32.0 | 29 Feb 24 02:07 UTC |                     |
	|         | systemctl status docker --all                        |                |         |         |                     |                     |
	|         | --full --no-pager                                    |                |         |         |                     |                     |
	| ssh     | -p kindnet-704272 sudo                               | kindnet-704272 | jenkins | v1.32.0 | 29 Feb 24 02:07 UTC | 29 Feb 24 02:07 UTC |
	|         | systemctl cat docker                                 |                |         |         |                     |                     |
	|         | --no-pager                                           |                |         |         |                     |                     |
	| ssh     | -p kindnet-704272 sudo cat                           | kindnet-704272 | jenkins | v1.32.0 | 29 Feb 24 02:07 UTC | 29 Feb 24 02:07 UTC |
	|         | /etc/docker/daemon.json                              |                |         |         |                     |                     |
	| ssh     | -p kindnet-704272 sudo docker                        | kindnet-704272 | jenkins | v1.32.0 | 29 Feb 24 02:07 UTC |                     |
	|         | system info                                          |                |         |         |                     |                     |
	| ssh     | -p kindnet-704272 sudo                               | kindnet-704272 | jenkins | v1.32.0 | 29 Feb 24 02:07 UTC |                     |
	|         | systemctl status cri-docker                          |                |         |         |                     |                     |
	|         | --all --full --no-pager                              |                |         |         |                     |                     |
	| ssh     | -p kindnet-704272 sudo                               | kindnet-704272 | jenkins | v1.32.0 | 29 Feb 24 02:07 UTC | 29 Feb 24 02:07 UTC |
	|         | systemctl cat cri-docker                             |                |         |         |                     |                     |
	|         | --no-pager                                           |                |         |         |                     |                     |
	| ssh     | -p kindnet-704272 sudo cat                           | kindnet-704272 | jenkins | v1.32.0 | 29 Feb 24 02:07 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                |         |         |                     |                     |
	| ssh     | -p kindnet-704272 sudo cat                           | kindnet-704272 | jenkins | v1.32.0 | 29 Feb 24 02:07 UTC | 29 Feb 24 02:07 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |                |         |         |                     |                     |
	| ssh     | -p kindnet-704272 sudo                               | kindnet-704272 | jenkins | v1.32.0 | 29 Feb 24 02:07 UTC | 29 Feb 24 02:07 UTC |
	|         | cri-dockerd --version                                |                |         |         |                     |                     |
	| ssh     | -p kindnet-704272 sudo                               | kindnet-704272 | jenkins | v1.32.0 | 29 Feb 24 02:07 UTC | 29 Feb 24 02:07 UTC |
	|         | systemctl status containerd                          |                |         |         |                     |                     |
	|         | --all --full --no-pager                              |                |         |         |                     |                     |
	| ssh     | -p kindnet-704272 sudo                               | kindnet-704272 | jenkins | v1.32.0 | 29 Feb 24 02:07 UTC | 29 Feb 24 02:07 UTC |
	|         | systemctl cat containerd                             |                |         |         |                     |                     |
	|         | --no-pager                                           |                |         |         |                     |                     |
	| ssh     | -p kindnet-704272 sudo cat                           | kindnet-704272 | jenkins | v1.32.0 | 29 Feb 24 02:07 UTC | 29 Feb 24 02:07 UTC |
	|         | /lib/systemd/system/containerd.service               |                |         |         |                     |                     |
	| ssh     | -p kindnet-704272 sudo cat                           | kindnet-704272 | jenkins | v1.32.0 | 29 Feb 24 02:07 UTC | 29 Feb 24 02:07 UTC |
	|         | /etc/containerd/config.toml                          |                |         |         |                     |                     |
	| ssh     | -p kindnet-704272 sudo                               | kindnet-704272 | jenkins | v1.32.0 | 29 Feb 24 02:07 UTC | 29 Feb 24 02:07 UTC |
	|         | containerd config dump                               |                |         |         |                     |                     |
	| ssh     | -p kindnet-704272 sudo                               | kindnet-704272 | jenkins | v1.32.0 | 29 Feb 24 02:07 UTC |                     |
	|         | systemctl status crio --all                          |                |         |         |                     |                     |
	|         | --full --no-pager                                    |                |         |         |                     |                     |
	| ssh     | -p kindnet-704272 sudo                               | kindnet-704272 | jenkins | v1.32.0 | 29 Feb 24 02:07 UTC | 29 Feb 24 02:07 UTC |
	|         | systemctl cat crio --no-pager                        |                |         |         |                     |                     |
	| ssh     | -p kindnet-704272 sudo find                          | kindnet-704272 | jenkins | v1.32.0 | 29 Feb 24 02:07 UTC | 29 Feb 24 02:07 UTC |
	|         | /etc/crio -type f -exec sh -c                        |                |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                |         |         |                     |                     |
	| ssh     | -p kindnet-704272 sudo crio                          | kindnet-704272 | jenkins | v1.32.0 | 29 Feb 24 02:07 UTC | 29 Feb 24 02:07 UTC |
	|         | config                                               |                |         |         |                     |                     |
	| delete  | -p kindnet-704272                                    | kindnet-704272 | jenkins | v1.32.0 | 29 Feb 24 02:07 UTC | 29 Feb 24 02:07 UTC |
	| start   | -p bridge-704272 --memory=3072                       | bridge-704272  | jenkins | v1.32.0 | 29 Feb 24 02:07 UTC |                     |
	|         | --alsologtostderr --wait=true                        |                |         |         |                     |                     |
	|         | --wait-timeout=15m                                   |                |         |         |                     |                     |
	|         | --cni=bridge --driver=kvm2                           |                |         |         |                     |                     |
	|         | --container-runtime=containerd                       |                |         |         |                     |                     |
	|---------|------------------------------------------------------|----------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/29 02:07:25
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.22.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0229 02:07:25.891178  352533 out.go:291] Setting OutFile to fd 1 ...
	I0229 02:07:25.891288  352533 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 02:07:25.891297  352533 out.go:304] Setting ErrFile to fd 2...
	I0229 02:07:25.891301  352533 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 02:07:25.891506  352533 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18063-309085/.minikube/bin
	I0229 02:07:25.892056  352533 out.go:298] Setting JSON to false
	I0229 02:07:25.893283  352533 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":6590,"bootTime":1709165856,"procs":304,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1052-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0229 02:07:25.893358  352533 start.go:139] virtualization: kvm guest
	I0229 02:07:25.895398  352533 out.go:177] * [bridge-704272] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0229 02:07:25.896915  352533 out.go:177]   - MINIKUBE_LOCATION=18063
	I0229 02:07:25.898017  352533 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0229 02:07:25.896972  352533 notify.go:220] Checking for updates...
	I0229 02:07:25.900160  352533 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18063-309085/kubeconfig
	I0229 02:07:25.901288  352533 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18063-309085/.minikube
	I0229 02:07:25.902541  352533 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0229 02:07:25.903774  352533 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0229 02:07:25.905526  352533 config.go:182] Loaded profile config "enable-default-cni-704272": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0229 02:07:25.905622  352533 config.go:182] Loaded profile config "flannel-704272": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0229 02:07:25.905696  352533 config.go:182] Loaded profile config "kubernetes-upgrade-335938": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.29.0-rc.2
	I0229 02:07:25.905776  352533 driver.go:392] Setting default libvirt URI to qemu:///system
	I0229 02:07:25.941192  352533 out.go:177] * Using the kvm2 driver based on user configuration
	I0229 02:07:25.942356  352533 start.go:299] selected driver: kvm2
	I0229 02:07:25.942371  352533 start.go:903] validating driver "kvm2" against <nil>
	I0229 02:07:25.942381  352533 start.go:914] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0229 02:07:25.943056  352533 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 02:07:25.943150  352533 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18063-309085/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0229 02:07:25.958764  352533 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0229 02:07:25.958820  352533 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0229 02:07:25.959026  352533 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0229 02:07:25.959097  352533 cni.go:84] Creating CNI manager for "bridge"
	I0229 02:07:25.959110  352533 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0229 02:07:25.959119  352533 start_flags.go:323] config:
	{Name:bridge-704272 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:bridge-704272 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 02:07:25.959274  352533 iso.go:125] acquiring lock: {Name:mk1a6013324bd96d611c2de882ca0af6f4df38f8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 02:07:25.960745  352533 out.go:177] * Starting control plane node bridge-704272 in cluster bridge-704272
	I0229 02:07:24.879618  350492 main.go:141] libmachine: (enable-default-cni-704272) DBG | domain enable-default-cni-704272 has defined MAC address 52:54:00:58:15:66 in network mk-enable-default-cni-704272
	I0229 02:07:24.880278  350492 main.go:141] libmachine: (enable-default-cni-704272) DBG | domain enable-default-cni-704272 has current primary IP address 192.168.72.111 and MAC address 52:54:00:58:15:66 in network mk-enable-default-cni-704272
	I0229 02:07:24.880310  350492 main.go:141] libmachine: (enable-default-cni-704272) Found IP for machine: 192.168.72.111
	I0229 02:07:24.880323  350492 main.go:141] libmachine: (enable-default-cni-704272) Reserving static IP address...
	I0229 02:07:24.880663  350492 main.go:141] libmachine: (enable-default-cni-704272) DBG | unable to find host DHCP lease matching {name: "enable-default-cni-704272", mac: "52:54:00:58:15:66", ip: "192.168.72.111"} in network mk-enable-default-cni-704272
	I0229 02:07:24.960581  350492 main.go:141] libmachine: (enable-default-cni-704272) Reserved static IP address: 192.168.72.111
	I0229 02:07:24.960616  350492 main.go:141] libmachine: (enable-default-cni-704272) Waiting for SSH to be available...
	I0229 02:07:24.960626  350492 main.go:141] libmachine: (enable-default-cni-704272) DBG | Getting to WaitForSSH function...
	I0229 02:07:24.963854  350492 main.go:141] libmachine: (enable-default-cni-704272) DBG | domain enable-default-cni-704272 has defined MAC address 52:54:00:58:15:66 in network mk-enable-default-cni-704272
	I0229 02:07:24.964410  350492 main.go:141] libmachine: (enable-default-cni-704272) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:58:15:66", ip: ""} in network mk-enable-default-cni-704272
	I0229 02:07:24.964443  350492 main.go:141] libmachine: (enable-default-cni-704272) DBG | unable to find defined IP address of network mk-enable-default-cni-704272 interface with MAC address 52:54:00:58:15:66
	I0229 02:07:24.964588  350492 main.go:141] libmachine: (enable-default-cni-704272) DBG | Using SSH client type: external
	I0229 02:07:24.964628  350492 main.go:141] libmachine: (enable-default-cni-704272) DBG | Using SSH private key: /home/jenkins/minikube-integration/18063-309085/.minikube/machines/enable-default-cni-704272/id_rsa (-rw-------)
	I0229 02:07:24.964682  350492 main.go:141] libmachine: (enable-default-cni-704272) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18063-309085/.minikube/machines/enable-default-cni-704272/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0229 02:07:24.964696  350492 main.go:141] libmachine: (enable-default-cni-704272) DBG | About to run SSH command:
	I0229 02:07:24.964712  350492 main.go:141] libmachine: (enable-default-cni-704272) DBG | exit 0
	I0229 02:07:24.969155  350492 main.go:141] libmachine: (enable-default-cni-704272) DBG | SSH cmd err, output: exit status 255: 
	I0229 02:07:24.969183  350492 main.go:141] libmachine: (enable-default-cni-704272) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0229 02:07:24.969214  350492 main.go:141] libmachine: (enable-default-cni-704272) DBG | command : exit 0
	I0229 02:07:24.969224  350492 main.go:141] libmachine: (enable-default-cni-704272) DBG | err     : exit status 255
	I0229 02:07:24.969236  350492 main.go:141] libmachine: (enable-default-cni-704272) DBG | output  : 
	I0229 02:07:27.970374  350492 main.go:141] libmachine: (enable-default-cni-704272) DBG | Getting to WaitForSSH function...
	I0229 02:07:27.972630  350492 main.go:141] libmachine: (enable-default-cni-704272) DBG | domain enable-default-cni-704272 has defined MAC address 52:54:00:58:15:66 in network mk-enable-default-cni-704272
	I0229 02:07:27.973112  350492 main.go:141] libmachine: (enable-default-cni-704272) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:15:66", ip: ""} in network mk-enable-default-cni-704272: {Iface:virbr3 ExpiryTime:2024-02-29 03:07:16 +0000 UTC Type:0 Mac:52:54:00:58:15:66 Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:enable-default-cni-704272 Clientid:01:52:54:00:58:15:66}
	I0229 02:07:27.973145  350492 main.go:141] libmachine: (enable-default-cni-704272) DBG | domain enable-default-cni-704272 has defined IP address 192.168.72.111 and MAC address 52:54:00:58:15:66 in network mk-enable-default-cni-704272
	I0229 02:07:27.973236  350492 main.go:141] libmachine: (enable-default-cni-704272) DBG | Using SSH client type: external
	I0229 02:07:27.973270  350492 main.go:141] libmachine: (enable-default-cni-704272) DBG | Using SSH private key: /home/jenkins/minikube-integration/18063-309085/.minikube/machines/enable-default-cni-704272/id_rsa (-rw-------)
	I0229 02:07:27.973290  350492 main.go:141] libmachine: (enable-default-cni-704272) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.111 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18063-309085/.minikube/machines/enable-default-cni-704272/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0229 02:07:27.973306  350492 main.go:141] libmachine: (enable-default-cni-704272) DBG | About to run SSH command:
	I0229 02:07:27.973316  350492 main.go:141] libmachine: (enable-default-cni-704272) DBG | exit 0
	I0229 02:07:28.106382  350492 main.go:141] libmachine: (enable-default-cni-704272) DBG | SSH cmd err, output: <nil>: 
	I0229 02:07:28.106608  350492 main.go:141] libmachine: (enable-default-cni-704272) KVM machine creation complete!
	I0229 02:07:28.106929  350492 main.go:141] libmachine: (enable-default-cni-704272) Calling .GetConfigRaw
	I0229 02:07:28.107575  350492 main.go:141] libmachine: (enable-default-cni-704272) Calling .DriverName
	I0229 02:07:28.107778  350492 main.go:141] libmachine: (enable-default-cni-704272) Calling .DriverName
	I0229 02:07:28.107984  350492 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0229 02:07:28.108001  350492 main.go:141] libmachine: (enable-default-cni-704272) Calling .GetState
	I0229 02:07:28.109464  350492 main.go:141] libmachine: Detecting operating system of created instance...
	I0229 02:07:28.109477  350492 main.go:141] libmachine: Waiting for SSH to be available...
	I0229 02:07:28.109482  350492 main.go:141] libmachine: Getting to WaitForSSH function...
	I0229 02:07:28.109488  350492 main.go:141] libmachine: (enable-default-cni-704272) Calling .GetSSHHostname
	I0229 02:07:28.112545  350492 main.go:141] libmachine: (enable-default-cni-704272) DBG | domain enable-default-cni-704272 has defined MAC address 52:54:00:58:15:66 in network mk-enable-default-cni-704272
	I0229 02:07:28.112975  350492 main.go:141] libmachine: (enable-default-cni-704272) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:15:66", ip: ""} in network mk-enable-default-cni-704272: {Iface:virbr3 ExpiryTime:2024-02-29 03:07:16 +0000 UTC Type:0 Mac:52:54:00:58:15:66 Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:enable-default-cni-704272 Clientid:01:52:54:00:58:15:66}
	I0229 02:07:28.113004  350492 main.go:141] libmachine: (enable-default-cni-704272) DBG | domain enable-default-cni-704272 has defined IP address 192.168.72.111 and MAC address 52:54:00:58:15:66 in network mk-enable-default-cni-704272
	I0229 02:07:28.113149  350492 main.go:141] libmachine: (enable-default-cni-704272) Calling .GetSSHPort
	I0229 02:07:28.113308  350492 main.go:141] libmachine: (enable-default-cni-704272) Calling .GetSSHKeyPath
	I0229 02:07:28.113464  350492 main.go:141] libmachine: (enable-default-cni-704272) Calling .GetSSHKeyPath
	I0229 02:07:28.113650  350492 main.go:141] libmachine: (enable-default-cni-704272) Calling .GetSSHUsername
	I0229 02:07:28.113828  350492 main.go:141] libmachine: Using SSH client type: native
	I0229 02:07:28.114104  350492 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.72.111 22 <nil> <nil>}
	I0229 02:07:28.114120  350492 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0229 02:07:28.229740  350492 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0229 02:07:28.229761  350492 main.go:141] libmachine: Detecting the provisioner...
	I0229 02:07:28.229775  350492 main.go:141] libmachine: (enable-default-cni-704272) Calling .GetSSHHostname
	I0229 02:07:28.232985  350492 main.go:141] libmachine: (enable-default-cni-704272) DBG | domain enable-default-cni-704272 has defined MAC address 52:54:00:58:15:66 in network mk-enable-default-cni-704272
	I0229 02:07:28.233405  350492 main.go:141] libmachine: (enable-default-cni-704272) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:15:66", ip: ""} in network mk-enable-default-cni-704272: {Iface:virbr3 ExpiryTime:2024-02-29 03:07:16 +0000 UTC Type:0 Mac:52:54:00:58:15:66 Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:enable-default-cni-704272 Clientid:01:52:54:00:58:15:66}
	I0229 02:07:28.233436  350492 main.go:141] libmachine: (enable-default-cni-704272) DBG | domain enable-default-cni-704272 has defined IP address 192.168.72.111 and MAC address 52:54:00:58:15:66 in network mk-enable-default-cni-704272
	I0229 02:07:28.233657  350492 main.go:141] libmachine: (enable-default-cni-704272) Calling .GetSSHPort
	I0229 02:07:28.233878  350492 main.go:141] libmachine: (enable-default-cni-704272) Calling .GetSSHKeyPath
	I0229 02:07:28.234119  350492 main.go:141] libmachine: (enable-default-cni-704272) Calling .GetSSHKeyPath
	I0229 02:07:28.234290  350492 main.go:141] libmachine: (enable-default-cni-704272) Calling .GetSSHUsername
	I0229 02:07:28.234473  350492 main.go:141] libmachine: Using SSH client type: native
	I0229 02:07:28.234691  350492 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.72.111 22 <nil> <nil>}
	I0229 02:07:28.234703  350492 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0229 02:07:28.347406  350492 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0229 02:07:28.347493  350492 main.go:141] libmachine: found compatible host: buildroot
	I0229 02:07:28.347504  350492 main.go:141] libmachine: Provisioning with buildroot...
	I0229 02:07:28.347513  350492 main.go:141] libmachine: (enable-default-cni-704272) Calling .GetMachineName
	I0229 02:07:28.347798  350492 buildroot.go:166] provisioning hostname "enable-default-cni-704272"
	I0229 02:07:28.347833  350492 main.go:141] libmachine: (enable-default-cni-704272) Calling .GetMachineName
	I0229 02:07:28.348058  350492 main.go:141] libmachine: (enable-default-cni-704272) Calling .GetSSHHostname
	I0229 02:07:28.350845  350492 main.go:141] libmachine: (enable-default-cni-704272) DBG | domain enable-default-cni-704272 has defined MAC address 52:54:00:58:15:66 in network mk-enable-default-cni-704272
	I0229 02:07:28.351221  350492 main.go:141] libmachine: (enable-default-cni-704272) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:15:66", ip: ""} in network mk-enable-default-cni-704272: {Iface:virbr3 ExpiryTime:2024-02-29 03:07:16 +0000 UTC Type:0 Mac:52:54:00:58:15:66 Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:enable-default-cni-704272 Clientid:01:52:54:00:58:15:66}
	I0229 02:07:28.351265  350492 main.go:141] libmachine: (enable-default-cni-704272) DBG | domain enable-default-cni-704272 has defined IP address 192.168.72.111 and MAC address 52:54:00:58:15:66 in network mk-enable-default-cni-704272
	I0229 02:07:28.351391  350492 main.go:141] libmachine: (enable-default-cni-704272) Calling .GetSSHPort
	I0229 02:07:28.351572  350492 main.go:141] libmachine: (enable-default-cni-704272) Calling .GetSSHKeyPath
	I0229 02:07:28.351747  350492 main.go:141] libmachine: (enable-default-cni-704272) Calling .GetSSHKeyPath
	I0229 02:07:28.351884  350492 main.go:141] libmachine: (enable-default-cni-704272) Calling .GetSSHUsername
	I0229 02:07:28.352106  350492 main.go:141] libmachine: Using SSH client type: native
	I0229 02:07:28.352304  350492 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.72.111 22 <nil> <nil>}
	I0229 02:07:28.352322  350492 main.go:141] libmachine: About to run SSH command:
	sudo hostname enable-default-cni-704272 && echo "enable-default-cni-704272" | sudo tee /etc/hostname
	I0229 02:07:24.603003  349342 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:07:25.103309  349342 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:07:25.603343  349342 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:07:26.103332  349342 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:07:26.603910  349342 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:07:27.103803  349342 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:07:27.603817  349342 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:07:28.103030  349342 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:07:28.603516  349342 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:07:29.103323  349342 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:07:29.331709  350824 start.go:369] acquired machines lock for "kubernetes-upgrade-335938" in 34.159425384s
	I0229 02:07:29.331770  350824 start.go:96] Skipping create...Using existing machine configuration
	I0229 02:07:29.331779  350824 fix.go:54] fixHost starting: 
	I0229 02:07:29.332168  350824 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0229 02:07:29.332199  350824 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:07:29.352760  350824 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34845
	I0229 02:07:29.353247  350824 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:07:29.353772  350824 main.go:141] libmachine: Using API Version  1
	I0229 02:07:29.353814  350824 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:07:29.354181  350824 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:07:29.354363  350824 main.go:141] libmachine: (kubernetes-upgrade-335938) Calling .DriverName
	I0229 02:07:29.354531  350824 main.go:141] libmachine: (kubernetes-upgrade-335938) Calling .GetState
	I0229 02:07:29.356176  350824 fix.go:102] recreateIfNeeded on kubernetes-upgrade-335938: state=Running err=<nil>
	W0229 02:07:29.356210  350824 fix.go:128] unexpected machine state, will restart: <nil>
	I0229 02:07:29.357817  350824 out.go:177] * Updating the running kvm2 "kubernetes-upgrade-335938" VM ...
	I0229 02:07:29.359096  350824 machine.go:88] provisioning docker machine ...
	I0229 02:07:29.359124  350824 main.go:141] libmachine: (kubernetes-upgrade-335938) Calling .DriverName
	I0229 02:07:29.359332  350824 main.go:141] libmachine: (kubernetes-upgrade-335938) Calling .GetMachineName
	I0229 02:07:29.359475  350824 buildroot.go:166] provisioning hostname "kubernetes-upgrade-335938"
	I0229 02:07:29.359493  350824 main.go:141] libmachine: (kubernetes-upgrade-335938) Calling .GetMachineName
	I0229 02:07:29.359657  350824 main.go:141] libmachine: (kubernetes-upgrade-335938) Calling .GetSSHHostname
	I0229 02:07:29.362308  350824 main.go:141] libmachine: (kubernetes-upgrade-335938) DBG | domain kubernetes-upgrade-335938 has defined MAC address 52:54:00:e1:ea:4e in network mk-kubernetes-upgrade-335938
	I0229 02:07:29.362779  350824 main.go:141] libmachine: (kubernetes-upgrade-335938) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:ea:4e", ip: ""} in network mk-kubernetes-upgrade-335938: {Iface:virbr2 ExpiryTime:2024-02-29 03:01:43 +0000 UTC Type:0 Mac:52:54:00:e1:ea:4e Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:kubernetes-upgrade-335938 Clientid:01:52:54:00:e1:ea:4e}
	I0229 02:07:29.362811  350824 main.go:141] libmachine: (kubernetes-upgrade-335938) DBG | domain kubernetes-upgrade-335938 has defined IP address 192.168.50.62 and MAC address 52:54:00:e1:ea:4e in network mk-kubernetes-upgrade-335938
	I0229 02:07:29.362925  350824 main.go:141] libmachine: (kubernetes-upgrade-335938) Calling .GetSSHPort
	I0229 02:07:29.363139  350824 main.go:141] libmachine: (kubernetes-upgrade-335938) Calling .GetSSHKeyPath
	I0229 02:07:29.363306  350824 main.go:141] libmachine: (kubernetes-upgrade-335938) Calling .GetSSHKeyPath
	I0229 02:07:29.363433  350824 main.go:141] libmachine: (kubernetes-upgrade-335938) Calling .GetSSHUsername
	I0229 02:07:29.363630  350824 main.go:141] libmachine: Using SSH client type: native
	I0229 02:07:29.363872  350824 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.50.62 22 <nil> <nil>}
	I0229 02:07:29.363891  350824 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-335938 && echo "kubernetes-upgrade-335938" | sudo tee /etc/hostname
	I0229 02:07:29.499602  350824 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-335938
	
	I0229 02:07:29.499632  350824 main.go:141] libmachine: (kubernetes-upgrade-335938) Calling .GetSSHHostname
	I0229 02:07:29.502755  350824 main.go:141] libmachine: (kubernetes-upgrade-335938) DBG | domain kubernetes-upgrade-335938 has defined MAC address 52:54:00:e1:ea:4e in network mk-kubernetes-upgrade-335938
	I0229 02:07:29.503159  350824 main.go:141] libmachine: (kubernetes-upgrade-335938) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:ea:4e", ip: ""} in network mk-kubernetes-upgrade-335938: {Iface:virbr2 ExpiryTime:2024-02-29 03:01:43 +0000 UTC Type:0 Mac:52:54:00:e1:ea:4e Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:kubernetes-upgrade-335938 Clientid:01:52:54:00:e1:ea:4e}
	I0229 02:07:29.503214  350824 main.go:141] libmachine: (kubernetes-upgrade-335938) DBG | domain kubernetes-upgrade-335938 has defined IP address 192.168.50.62 and MAC address 52:54:00:e1:ea:4e in network mk-kubernetes-upgrade-335938
	I0229 02:07:29.503479  350824 main.go:141] libmachine: (kubernetes-upgrade-335938) Calling .GetSSHPort
	I0229 02:07:29.503692  350824 main.go:141] libmachine: (kubernetes-upgrade-335938) Calling .GetSSHKeyPath
	I0229 02:07:29.503902  350824 main.go:141] libmachine: (kubernetes-upgrade-335938) Calling .GetSSHKeyPath
	I0229 02:07:29.504071  350824 main.go:141] libmachine: (kubernetes-upgrade-335938) Calling .GetSSHUsername
	I0229 02:07:29.504235  350824 main.go:141] libmachine: Using SSH client type: native
	I0229 02:07:29.504471  350824 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.50.62 22 <nil> <nil>}
	I0229 02:07:29.504499  350824 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-335938' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-335938/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-335938' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0229 02:07:29.624908  350824 main.go:141] libmachine: SSH cmd err, output: <nil>: 
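The /etc/hosts fix-up above is a single templated shell snippet: rewrite any existing 127.0.1.1 entry to the machine name, otherwise append one. Rendering it from Go, roughly as the buildroot provisioner does (a sketch; not minikube's exact helper):

package main

import "fmt"

// hostsCmd returns the shell snippet that pins 127.0.1.1 to the machine name.
func hostsCmd(hostname string) string {
	return fmt.Sprintf(`if ! grep -xq '.*\s%[1]s' /etc/hosts; then
  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts
  else
    echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts
  fi
fi`, hostname)
}

func main() { fmt.Println(hostsCmd("kubernetes-upgrade-335938")) }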
	I0229 02:07:29.624940  350824 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18063-309085/.minikube CaCertPath:/home/jenkins/minikube-integration/18063-309085/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18063-309085/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18063-309085/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18063-309085/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18063-309085/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18063-309085/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18063-309085/.minikube}
	I0229 02:07:29.625002  350824 buildroot.go:174] setting up certificates
	I0229 02:07:29.625014  350824 provision.go:83] configureAuth start
	I0229 02:07:29.625025  350824 main.go:141] libmachine: (kubernetes-upgrade-335938) Calling .GetMachineName
	I0229 02:07:29.625362  350824 main.go:141] libmachine: (kubernetes-upgrade-335938) Calling .GetIP
	I0229 02:07:29.628613  350824 main.go:141] libmachine: (kubernetes-upgrade-335938) DBG | domain kubernetes-upgrade-335938 has defined MAC address 52:54:00:e1:ea:4e in network mk-kubernetes-upgrade-335938
	I0229 02:07:29.629069  350824 main.go:141] libmachine: (kubernetes-upgrade-335938) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:ea:4e", ip: ""} in network mk-kubernetes-upgrade-335938: {Iface:virbr2 ExpiryTime:2024-02-29 03:01:43 +0000 UTC Type:0 Mac:52:54:00:e1:ea:4e Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:kubernetes-upgrade-335938 Clientid:01:52:54:00:e1:ea:4e}
	I0229 02:07:29.629100  350824 main.go:141] libmachine: (kubernetes-upgrade-335938) DBG | domain kubernetes-upgrade-335938 has defined IP address 192.168.50.62 and MAC address 52:54:00:e1:ea:4e in network mk-kubernetes-upgrade-335938
	I0229 02:07:29.629270  350824 main.go:141] libmachine: (kubernetes-upgrade-335938) Calling .GetSSHHostname
	I0229 02:07:29.632002  350824 main.go:141] libmachine: (kubernetes-upgrade-335938) DBG | domain kubernetes-upgrade-335938 has defined MAC address 52:54:00:e1:ea:4e in network mk-kubernetes-upgrade-335938
	I0229 02:07:29.632466  350824 main.go:141] libmachine: (kubernetes-upgrade-335938) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:ea:4e", ip: ""} in network mk-kubernetes-upgrade-335938: {Iface:virbr2 ExpiryTime:2024-02-29 03:01:43 +0000 UTC Type:0 Mac:52:54:00:e1:ea:4e Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:kubernetes-upgrade-335938 Clientid:01:52:54:00:e1:ea:4e}
	I0229 02:07:29.632496  350824 main.go:141] libmachine: (kubernetes-upgrade-335938) DBG | domain kubernetes-upgrade-335938 has defined IP address 192.168.50.62 and MAC address 52:54:00:e1:ea:4e in network mk-kubernetes-upgrade-335938
	I0229 02:07:29.632634  350824 provision.go:138] copyHostCerts
	I0229 02:07:29.632696  350824 exec_runner.go:144] found /home/jenkins/minikube-integration/18063-309085/.minikube/ca.pem, removing ...
	I0229 02:07:29.632719  350824 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18063-309085/.minikube/ca.pem
	I0229 02:07:29.632780  350824 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18063-309085/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18063-309085/.minikube/ca.pem (1082 bytes)
	I0229 02:07:29.632904  350824 exec_runner.go:144] found /home/jenkins/minikube-integration/18063-309085/.minikube/cert.pem, removing ...
	I0229 02:07:29.632916  350824 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18063-309085/.minikube/cert.pem
	I0229 02:07:29.632946  350824 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18063-309085/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18063-309085/.minikube/cert.pem (1123 bytes)
	I0229 02:07:29.633026  350824 exec_runner.go:144] found /home/jenkins/minikube-integration/18063-309085/.minikube/key.pem, removing ...
	I0229 02:07:29.633036  350824 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18063-309085/.minikube/key.pem
	I0229 02:07:29.633057  350824 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18063-309085/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18063-309085/.minikube/key.pem (1675 bytes)
	I0229 02:07:29.633105  350824 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18063-309085/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18063-309085/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18063-309085/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-335938 san=[192.168.50.62 192.168.50.62 localhost 127.0.0.1 minikube kubernetes-upgrade-335938]
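provision.go:112 above generates a server certificate signed by the machine CA, with the VM IP and the usual local names as SANs. A condensed crypto/x509 sketch of that shape; to stay self-contained it mints a throwaway CA, where the real flow reuses ca.pem/ca-key.pem from the store:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA (stand-in for .minikube/certs/ca.pem + ca-key.pem).
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server cert carrying the SANs from the log line above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.kubernetes-upgrade-335938"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"localhost", "minikube", "kubernetes-upgrade-335938"},
		IPAddresses:  []net.IP{net.ParseIP("192.168.50.62"), net.ParseIP("127.0.0.1")},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}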
	I0229 02:07:29.903582  350824 provision.go:172] copyRemoteCerts
	I0229 02:07:29.903658  350824 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0229 02:07:29.903695  350824 main.go:141] libmachine: (kubernetes-upgrade-335938) Calling .GetSSHHostname
	I0229 02:07:29.906999  350824 main.go:141] libmachine: (kubernetes-upgrade-335938) DBG | domain kubernetes-upgrade-335938 has defined MAC address 52:54:00:e1:ea:4e in network mk-kubernetes-upgrade-335938
	I0229 02:07:29.907479  350824 main.go:141] libmachine: (kubernetes-upgrade-335938) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:ea:4e", ip: ""} in network mk-kubernetes-upgrade-335938: {Iface:virbr2 ExpiryTime:2024-02-29 03:01:43 +0000 UTC Type:0 Mac:52:54:00:e1:ea:4e Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:kubernetes-upgrade-335938 Clientid:01:52:54:00:e1:ea:4e}
	I0229 02:07:29.907511  350824 main.go:141] libmachine: (kubernetes-upgrade-335938) DBG | domain kubernetes-upgrade-335938 has defined IP address 192.168.50.62 and MAC address 52:54:00:e1:ea:4e in network mk-kubernetes-upgrade-335938
	I0229 02:07:29.907703  350824 main.go:141] libmachine: (kubernetes-upgrade-335938) Calling .GetSSHPort
	I0229 02:07:29.907921  350824 main.go:141] libmachine: (kubernetes-upgrade-335938) Calling .GetSSHKeyPath
	I0229 02:07:29.908099  350824 main.go:141] libmachine: (kubernetes-upgrade-335938) Calling .GetSSHUsername
	I0229 02:07:29.908265  350824 sshutil.go:53] new ssh client: &{IP:192.168.50.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-309085/.minikube/machines/kubernetes-upgrade-335938/id_rsa Username:docker}
	I0229 02:07:29.994723  350824 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-309085/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0229 02:07:30.025062  350824 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-309085/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0229 02:07:30.060533  350824 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-309085/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0229 02:07:25.961815  352533 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I0229 02:07:25.961851  352533 preload.go:148] Found local preload: /home/jenkins/minikube-integration/18063-309085/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-amd64.tar.lz4
	I0229 02:07:25.961861  352533 cache.go:56] Caching tarball of preloaded images
	I0229 02:07:25.961925  352533 preload.go:174] Found /home/jenkins/minikube-integration/18063-309085/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0229 02:07:25.961935  352533 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on containerd
	I0229 02:07:25.962052  352533 profile.go:148] Saving config to /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/bridge-704272/config.json ...
	I0229 02:07:25.962118  352533 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/bridge-704272/config.json: {Name:mk67986b8b4e606dbad379456cce8220118f98c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:07:25.962267  352533 start.go:365] acquiring machines lock for bridge-704272: {Name:mk8de78527e9cb979575b614e5d893b33768243a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0229 02:07:30.372535  352533 start.go:369] acquired machines lock for "bridge-704272" in 4.410219079s
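The "machines lock" serializes VM creation across the test binaries sharing one .minikube store; note bridge-704272 waited 4.4s here while kubernetes-upgrade-335938 waited 34s above. A simplified file-based version with the same Delay/Timeout shape as the logged lock spec (illustrative only; minikube's real lock is more involved):

package main

import (
	"fmt"
	"os"
	"time"
)

// acquire spins on an O_EXCL lock file, polling every delay, giving up after timeout.
func acquire(path string, delay, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
		if err == nil {
			f.Close()
			return nil // lock held; release by removing the file
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out acquiring %s: %w", path, err)
		}
		time.Sleep(delay)
	}
}

func main() {
	start := time.Now()
	if err := acquire("/tmp/mk-machines.lock", 500*time.Millisecond, 13*time.Minute); err != nil {
		panic(err)
	}
	fmt.Printf("acquired machines lock in %s\n", time.Since(start))
	os.Remove("/tmp/mk-machines.lock")
}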
	I0229 02:07:30.372588  352533 start.go:93] Provisioning new machine with config: &{Name:bridge-704272 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:bridge-704272 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0229 02:07:30.372724  352533 start.go:125] createHost starting for "" (driver="kvm2")
	I0229 02:07:30.375619  352533 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0229 02:07:30.375890  352533 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0229 02:07:30.375981  352533 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:07:30.397251  352533 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42413
	I0229 02:07:30.397910  352533 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:07:30.402074  352533 main.go:141] libmachine: Using API Version  1
	I0229 02:07:30.402364  352533 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:07:30.402832  352533 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:07:30.404898  352533 main.go:141] libmachine: (bridge-704272) Calling .GetMachineName
	I0229 02:07:30.405083  352533 main.go:141] libmachine: (bridge-704272) Calling .DriverName
	I0229 02:07:30.405250  352533 start.go:159] libmachine.API.Create for "bridge-704272" (driver="kvm2")
	I0229 02:07:30.405294  352533 client.go:168] LocalClient.Create starting
	I0229 02:07:30.405324  352533 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18063-309085/.minikube/certs/ca.pem
	I0229 02:07:30.405363  352533 main.go:141] libmachine: Decoding PEM data...
	I0229 02:07:30.405380  352533 main.go:141] libmachine: Parsing certificate...
	I0229 02:07:30.405449  352533 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18063-309085/.minikube/certs/cert.pem
	I0229 02:07:30.405475  352533 main.go:141] libmachine: Decoding PEM data...
	I0229 02:07:30.405488  352533 main.go:141] libmachine: Parsing certificate...
	I0229 02:07:30.405525  352533 main.go:141] libmachine: Running pre-create checks...
	I0229 02:07:30.405536  352533 main.go:141] libmachine: (bridge-704272) Calling .PreCreateCheck
	I0229 02:07:30.405976  352533 main.go:141] libmachine: (bridge-704272) Calling .GetConfigRaw
	I0229 02:07:30.406758  352533 main.go:141] libmachine: Creating machine...
	I0229 02:07:30.406782  352533 main.go:141] libmachine: (bridge-704272) Calling .Create
	I0229 02:07:30.406951  352533 main.go:141] libmachine: (bridge-704272) Creating KVM machine...
	I0229 02:07:30.408519  352533 main.go:141] libmachine: (bridge-704272) DBG | found existing default KVM network
	I0229 02:07:30.410715  352533 main.go:141] libmachine: (bridge-704272) DBG | I0229 02:07:30.410533  352575 network.go:207] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000210c70}
	I0229 02:07:30.416993  352533 main.go:141] libmachine: (bridge-704272) DBG | trying to create private KVM network mk-bridge-704272 192.168.39.0/24...
	I0229 02:07:30.496269  352533 main.go:141] libmachine: (bridge-704272) Setting up store path in /home/jenkins/minikube-integration/18063-309085/.minikube/machines/bridge-704272 ...
	I0229 02:07:30.496311  352533 main.go:141] libmachine: (bridge-704272) DBG | private KVM network mk-bridge-704272 192.168.39.0/24 created
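The free-subnet probe logged by network.go:207 above picks the first private /24 that does not collide with any existing host interface. A sketch of that scan (the candidate list is illustrative, mirroring the subnets seen in this run):

package main

import (
	"fmt"
	"net"
)

// freeSubnet returns the first candidate CIDR that no local interface address overlaps.
func freeSubnet(candidates []string) (string, error) {
	addrs, err := net.InterfaceAddrs()
	if err != nil {
		return "", err
	}
	for _, c := range candidates {
		_, cand, err := net.ParseCIDR(c)
		if err != nil {
			return "", err
		}
		taken := false
		for _, a := range addrs {
			ip, used, err := net.ParseCIDR(a.String())
			if err != nil {
				continue // non-CIDR address; skip
			}
			if cand.Contains(ip) || used.Contains(cand.IP) {
				taken = true
				break
			}
		}
		if !taken {
			return c, nil
		}
	}
	return "", fmt.Errorf("no free subnet among %v", candidates)
}

func main() {
	s, err := freeSubnet([]string{"192.168.39.0/24", "192.168.50.0/24", "192.168.61.0/24", "192.168.72.0/24"})
	fmt.Println(s, err)
}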
	I0229 02:07:30.496326  352533 main.go:141] libmachine: (bridge-704272) Building disk image from file:///home/jenkins/minikube-integration/18063-309085/.minikube/cache/iso/amd64/minikube-v1.32.1-1708638130-18020-amd64.iso
	I0229 02:07:30.496391  352533 main.go:141] libmachine: (bridge-704272) DBG | I0229 02:07:30.496207  352575 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18063-309085/.minikube
	I0229 02:07:30.496433  352533 main.go:141] libmachine: (bridge-704272) Downloading /home/jenkins/minikube-integration/18063-309085/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18063-309085/.minikube/cache/iso/amd64/minikube-v1.32.1-1708638130-18020-amd64.iso...
	I0229 02:07:30.778991  352533 main.go:141] libmachine: (bridge-704272) DBG | I0229 02:07:30.778862  352575 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18063-309085/.minikube/machines/bridge-704272/id_rsa...
	I0229 02:07:29.603382  349342 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:07:30.103282  349342 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:07:30.604036  349342 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:07:31.103295  349342 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:07:31.603242  349342 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:07:31.723119  349342 kubeadm.go:1088] duration metric: took 10.274968053s to wait for elevateKubeSystemPrivileges.
	I0229 02:07:31.723162  349342 kubeadm.go:406] StartCluster complete in 23.112820002s
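The burst of identical `kubectl get sa default` runs above (one every 500ms, ~10.3s total per the kubeadm.go:1088 metric) is the post-kubeadm wait for the default service account to exist before kube-system privileges are granted. The loop, reduced to its shape (binary path and arguments copied from the log; the helper name is illustrative):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForDefaultSA polls kubectl until "get sa default" succeeds or the deadline passes.
func waitForDefaultSA(kubectl, kubeconfig string, interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		cmd := exec.Command("sudo", kubectl, "get", "sa", "default", "--kubeconfig="+kubeconfig)
		if err := cmd.Run(); err == nil {
			return nil // service account exists
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("default service account never appeared")
		}
		time.Sleep(interval)
	}
}

func main() {
	start := time.Now()
	err := waitForDefaultSA("/var/lib/minikube/binaries/v1.28.4/kubectl",
		"/var/lib/minikube/kubeconfig", 500*time.Millisecond, 2*time.Minute)
	fmt.Println(time.Since(start), err)
}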
	I0229 02:07:31.723188  349342 settings.go:142] acquiring lock: {Name:mkf6d985c87ae1ba2300543c86d438bf48134dc6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:07:31.723275  349342 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18063-309085/kubeconfig
	I0229 02:07:31.724837  349342 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18063-309085/kubeconfig: {Name:mkd85f0f36cbc770f723a754929a01738907b7bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:07:31.725121  349342 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0229 02:07:31.725255  349342 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0229 02:07:31.725356  349342 addons.go:69] Setting storage-provisioner=true in profile "flannel-704272"
	I0229 02:07:31.725375  349342 addons.go:234] Setting addon storage-provisioner=true in "flannel-704272"
	I0229 02:07:31.725377  349342 addons.go:69] Setting default-storageclass=true in profile "flannel-704272"
	I0229 02:07:31.725412  349342 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "flannel-704272"
	I0229 02:07:31.725429  349342 config.go:182] Loaded profile config "flannel-704272": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0229 02:07:31.725463  349342 host.go:66] Checking if "flannel-704272" exists ...
	I0229 02:07:31.725917  349342 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0229 02:07:31.725922  349342 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0229 02:07:31.725967  349342 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:07:31.726026  349342 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:07:31.745827  349342 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35143
	I0229 02:07:31.746657  349342 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:07:31.746808  349342 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39955
	I0229 02:07:31.747183  349342 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:07:31.747321  349342 main.go:141] libmachine: Using API Version  1
	I0229 02:07:31.747343  349342 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:07:31.747768  349342 main.go:141] libmachine: Using API Version  1
	I0229 02:07:31.747787  349342 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:07:31.747817  349342 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:07:31.748003  349342 main.go:141] libmachine: (flannel-704272) Calling .GetState
	I0229 02:07:31.748284  349342 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:07:31.748869  349342 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0229 02:07:31.748914  349342 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:07:31.752957  349342 addons.go:234] Setting addon default-storageclass=true in "flannel-704272"
	I0229 02:07:31.753034  349342 host.go:66] Checking if "flannel-704272" exists ...
	I0229 02:07:31.753476  349342 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0229 02:07:31.753562  349342 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:07:31.770723  349342 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38151
	I0229 02:07:31.771152  349342 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:07:31.771840  349342 main.go:141] libmachine: Using API Version  1
	I0229 02:07:31.771860  349342 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:07:31.772322  349342 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:07:31.778213  349342 main.go:141] libmachine: (flannel-704272) Calling .GetState
	I0229 02:07:31.778275  349342 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45807
	I0229 02:07:31.780689  349342 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:07:31.781276  349342 main.go:141] libmachine: (flannel-704272) Calling .DriverName
	I0229 02:07:31.781321  349342 main.go:141] libmachine: Using API Version  1
	I0229 02:07:31.781339  349342 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:07:31.783323  349342 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 02:07:31.781786  349342 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:07:28.483954  350492 main.go:141] libmachine: SSH cmd err, output: <nil>: enable-default-cni-704272
	
	I0229 02:07:28.483989  350492 main.go:141] libmachine: (enable-default-cni-704272) Calling .GetSSHHostname
	I0229 02:07:28.487055  350492 main.go:141] libmachine: (enable-default-cni-704272) DBG | domain enable-default-cni-704272 has defined MAC address 52:54:00:58:15:66 in network mk-enable-default-cni-704272
	I0229 02:07:28.487480  350492 main.go:141] libmachine: (enable-default-cni-704272) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:15:66", ip: ""} in network mk-enable-default-cni-704272: {Iface:virbr3 ExpiryTime:2024-02-29 03:07:16 +0000 UTC Type:0 Mac:52:54:00:58:15:66 Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:enable-default-cni-704272 Clientid:01:52:54:00:58:15:66}
	I0229 02:07:28.487532  350492 main.go:141] libmachine: (enable-default-cni-704272) DBG | domain enable-default-cni-704272 has defined IP address 192.168.72.111 and MAC address 52:54:00:58:15:66 in network mk-enable-default-cni-704272
	I0229 02:07:28.487748  350492 main.go:141] libmachine: (enable-default-cni-704272) Calling .GetSSHPort
	I0229 02:07:28.487998  350492 main.go:141] libmachine: (enable-default-cni-704272) Calling .GetSSHKeyPath
	I0229 02:07:28.488228  350492 main.go:141] libmachine: (enable-default-cni-704272) Calling .GetSSHKeyPath
	I0229 02:07:28.488408  350492 main.go:141] libmachine: (enable-default-cni-704272) Calling .GetSSHUsername
	I0229 02:07:28.488619  350492 main.go:141] libmachine: Using SSH client type: native
	I0229 02:07:28.488859  350492 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.72.111 22 <nil> <nil>}
	I0229 02:07:28.488885  350492 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\senable-default-cni-704272' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 enable-default-cni-704272/g' /etc/hosts;
				else 
					echo '127.0.1.1 enable-default-cni-704272' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0229 02:07:28.616937  350492 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0229 02:07:28.616970  350492 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18063-309085/.minikube CaCertPath:/home/jenkins/minikube-integration/18063-309085/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18063-309085/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18063-309085/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18063-309085/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18063-309085/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18063-309085/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18063-309085/.minikube}
	I0229 02:07:28.617000  350492 buildroot.go:174] setting up certificates
	I0229 02:07:28.617013  350492 provision.go:83] configureAuth start
	I0229 02:07:28.617023  350492 main.go:141] libmachine: (enable-default-cni-704272) Calling .GetMachineName
	I0229 02:07:28.617339  350492 main.go:141] libmachine: (enable-default-cni-704272) Calling .GetIP
	I0229 02:07:28.620380  350492 main.go:141] libmachine: (enable-default-cni-704272) DBG | domain enable-default-cni-704272 has defined MAC address 52:54:00:58:15:66 in network mk-enable-default-cni-704272
	I0229 02:07:28.620781  350492 main.go:141] libmachine: (enable-default-cni-704272) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:15:66", ip: ""} in network mk-enable-default-cni-704272: {Iface:virbr3 ExpiryTime:2024-02-29 03:07:16 +0000 UTC Type:0 Mac:52:54:00:58:15:66 Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:enable-default-cni-704272 Clientid:01:52:54:00:58:15:66}
	I0229 02:07:28.620818  350492 main.go:141] libmachine: (enable-default-cni-704272) DBG | domain enable-default-cni-704272 has defined IP address 192.168.72.111 and MAC address 52:54:00:58:15:66 in network mk-enable-default-cni-704272
	I0229 02:07:28.620986  350492 main.go:141] libmachine: (enable-default-cni-704272) Calling .GetSSHHostname
	I0229 02:07:28.623557  350492 main.go:141] libmachine: (enable-default-cni-704272) DBG | domain enable-default-cni-704272 has defined MAC address 52:54:00:58:15:66 in network mk-enable-default-cni-704272
	I0229 02:07:28.623974  350492 main.go:141] libmachine: (enable-default-cni-704272) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:15:66", ip: ""} in network mk-enable-default-cni-704272: {Iface:virbr3 ExpiryTime:2024-02-29 03:07:16 +0000 UTC Type:0 Mac:52:54:00:58:15:66 Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:enable-default-cni-704272 Clientid:01:52:54:00:58:15:66}
	I0229 02:07:28.624016  350492 main.go:141] libmachine: (enable-default-cni-704272) DBG | domain enable-default-cni-704272 has defined IP address 192.168.72.111 and MAC address 52:54:00:58:15:66 in network mk-enable-default-cni-704272
	I0229 02:07:28.624184  350492 provision.go:138] copyHostCerts
	I0229 02:07:28.624244  350492 exec_runner.go:144] found /home/jenkins/minikube-integration/18063-309085/.minikube/key.pem, removing ...
	I0229 02:07:28.624261  350492 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18063-309085/.minikube/key.pem
	I0229 02:07:28.624327  350492 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18063-309085/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18063-309085/.minikube/key.pem (1675 bytes)
	I0229 02:07:28.624417  350492 exec_runner.go:144] found /home/jenkins/minikube-integration/18063-309085/.minikube/ca.pem, removing ...
	I0229 02:07:28.624430  350492 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18063-309085/.minikube/ca.pem
	I0229 02:07:28.624456  350492 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18063-309085/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18063-309085/.minikube/ca.pem (1082 bytes)
	I0229 02:07:28.624538  350492 exec_runner.go:144] found /home/jenkins/minikube-integration/18063-309085/.minikube/cert.pem, removing ...
	I0229 02:07:28.624549  350492 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18063-309085/.minikube/cert.pem
	I0229 02:07:28.624582  350492 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18063-309085/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18063-309085/.minikube/cert.pem (1123 bytes)
	I0229 02:07:28.624651  350492 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18063-309085/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18063-309085/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18063-309085/.minikube/certs/ca-key.pem org=jenkins.enable-default-cni-704272 san=[192.168.72.111 192.168.72.111 localhost 127.0.0.1 minikube enable-default-cni-704272]
	I0229 02:07:28.892648  350492 provision.go:172] copyRemoteCerts
	I0229 02:07:28.892717  350492 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0229 02:07:28.892750  350492 main.go:141] libmachine: (enable-default-cni-704272) Calling .GetSSHHostname
	I0229 02:07:28.895343  350492 main.go:141] libmachine: (enable-default-cni-704272) DBG | domain enable-default-cni-704272 has defined MAC address 52:54:00:58:15:66 in network mk-enable-default-cni-704272
	I0229 02:07:28.895685  350492 main.go:141] libmachine: (enable-default-cni-704272) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:15:66", ip: ""} in network mk-enable-default-cni-704272: {Iface:virbr3 ExpiryTime:2024-02-29 03:07:16 +0000 UTC Type:0 Mac:52:54:00:58:15:66 Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:enable-default-cni-704272 Clientid:01:52:54:00:58:15:66}
	I0229 02:07:28.895711  350492 main.go:141] libmachine: (enable-default-cni-704272) DBG | domain enable-default-cni-704272 has defined IP address 192.168.72.111 and MAC address 52:54:00:58:15:66 in network mk-enable-default-cni-704272
	I0229 02:07:28.895887  350492 main.go:141] libmachine: (enable-default-cni-704272) Calling .GetSSHPort
	I0229 02:07:28.896100  350492 main.go:141] libmachine: (enable-default-cni-704272) Calling .GetSSHKeyPath
	I0229 02:07:28.896272  350492 main.go:141] libmachine: (enable-default-cni-704272) Calling .GetSSHUsername
	I0229 02:07:28.896431  350492 sshutil.go:53] new ssh client: &{IP:192.168.72.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-309085/.minikube/machines/enable-default-cni-704272/id_rsa Username:docker}
	I0229 02:07:28.986891  350492 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-309085/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0229 02:07:29.014235  350492 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-309085/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0229 02:07:29.041808  350492 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-309085/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0229 02:07:29.067548  350492 provision.go:86] duration metric: configureAuth took 450.520564ms
	I0229 02:07:29.067583  350492 buildroot.go:189] setting minikube options for container-runtime
	I0229 02:07:29.067743  350492 config.go:182] Loaded profile config "enable-default-cni-704272": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0229 02:07:29.067768  350492 main.go:141] libmachine: Checking connection to Docker...
	I0229 02:07:29.067779  350492 main.go:141] libmachine: (enable-default-cni-704272) Calling .GetURL
	I0229 02:07:29.069029  350492 main.go:141] libmachine: (enable-default-cni-704272) DBG | Using libvirt version 6000000
	I0229 02:07:29.071135  350492 main.go:141] libmachine: (enable-default-cni-704272) DBG | domain enable-default-cni-704272 has defined MAC address 52:54:00:58:15:66 in network mk-enable-default-cni-704272
	I0229 02:07:29.071455  350492 main.go:141] libmachine: (enable-default-cni-704272) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:15:66", ip: ""} in network mk-enable-default-cni-704272: {Iface:virbr3 ExpiryTime:2024-02-29 03:07:16 +0000 UTC Type:0 Mac:52:54:00:58:15:66 Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:enable-default-cni-704272 Clientid:01:52:54:00:58:15:66}
	I0229 02:07:29.071499  350492 main.go:141] libmachine: (enable-default-cni-704272) DBG | domain enable-default-cni-704272 has defined IP address 192.168.72.111 and MAC address 52:54:00:58:15:66 in network mk-enable-default-cni-704272
	I0229 02:07:29.071622  350492 main.go:141] libmachine: Docker is up and running!
	I0229 02:07:29.071640  350492 main.go:141] libmachine: Reticulating splines...
	I0229 02:07:29.071649  350492 client.go:171] LocalClient.Create took 30.092766709s
	I0229 02:07:29.071672  350492 start.go:167] duration metric: libmachine.API.Create for "enable-default-cni-704272" took 30.092831341s
	I0229 02:07:29.071687  350492 start.go:300] post-start starting for "enable-default-cni-704272" (driver="kvm2")
	I0229 02:07:29.071702  350492 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0229 02:07:29.071725  350492 main.go:141] libmachine: (enable-default-cni-704272) Calling .DriverName
	I0229 02:07:29.071984  350492 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0229 02:07:29.072011  350492 main.go:141] libmachine: (enable-default-cni-704272) Calling .GetSSHHostname
	I0229 02:07:29.074208  350492 main.go:141] libmachine: (enable-default-cni-704272) DBG | domain enable-default-cni-704272 has defined MAC address 52:54:00:58:15:66 in network mk-enable-default-cni-704272
	I0229 02:07:29.074555  350492 main.go:141] libmachine: (enable-default-cni-704272) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:15:66", ip: ""} in network mk-enable-default-cni-704272: {Iface:virbr3 ExpiryTime:2024-02-29 03:07:16 +0000 UTC Type:0 Mac:52:54:00:58:15:66 Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:enable-default-cni-704272 Clientid:01:52:54:00:58:15:66}
	I0229 02:07:29.074580  350492 main.go:141] libmachine: (enable-default-cni-704272) DBG | domain enable-default-cni-704272 has defined IP address 192.168.72.111 and MAC address 52:54:00:58:15:66 in network mk-enable-default-cni-704272
	I0229 02:07:29.074716  350492 main.go:141] libmachine: (enable-default-cni-704272) Calling .GetSSHPort
	I0229 02:07:29.074891  350492 main.go:141] libmachine: (enable-default-cni-704272) Calling .GetSSHKeyPath
	I0229 02:07:29.075050  350492 main.go:141] libmachine: (enable-default-cni-704272) Calling .GetSSHUsername
	I0229 02:07:29.075179  350492 sshutil.go:53] new ssh client: &{IP:192.168.72.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-309085/.minikube/machines/enable-default-cni-704272/id_rsa Username:docker}
	I0229 02:07:29.163845  350492 ssh_runner.go:195] Run: cat /etc/os-release
	I0229 02:07:29.169374  350492 info.go:137] Remote host: Buildroot 2023.02.9
	I0229 02:07:29.169401  350492 filesync.go:126] Scanning /home/jenkins/minikube-integration/18063-309085/.minikube/addons for local assets ...
	I0229 02:07:29.169463  350492 filesync.go:126] Scanning /home/jenkins/minikube-integration/18063-309085/.minikube/files for local assets ...
	I0229 02:07:29.169539  350492 filesync.go:149] local asset: /home/jenkins/minikube-integration/18063-309085/.minikube/files/etc/ssl/certs/3163362.pem -> 3163362.pem in /etc/ssl/certs
	I0229 02:07:29.169618  350492 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0229 02:07:29.180979  350492 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-309085/.minikube/files/etc/ssl/certs/3163362.pem --> /etc/ssl/certs/3163362.pem (1708 bytes)
	I0229 02:07:29.211392  350492 start.go:303] post-start completed in 139.688605ms
	I0229 02:07:29.211470  350492 main.go:141] libmachine: (enable-default-cni-704272) Calling .GetConfigRaw
	I0229 02:07:29.212072  350492 main.go:141] libmachine: (enable-default-cni-704272) Calling .GetIP
	I0229 02:07:29.214839  350492 main.go:141] libmachine: (enable-default-cni-704272) DBG | domain enable-default-cni-704272 has defined MAC address 52:54:00:58:15:66 in network mk-enable-default-cni-704272
	I0229 02:07:29.215206  350492 main.go:141] libmachine: (enable-default-cni-704272) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:15:66", ip: ""} in network mk-enable-default-cni-704272: {Iface:virbr3 ExpiryTime:2024-02-29 03:07:16 +0000 UTC Type:0 Mac:52:54:00:58:15:66 Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:enable-default-cni-704272 Clientid:01:52:54:00:58:15:66}
	I0229 02:07:29.215235  350492 main.go:141] libmachine: (enable-default-cni-704272) DBG | domain enable-default-cni-704272 has defined IP address 192.168.72.111 and MAC address 52:54:00:58:15:66 in network mk-enable-default-cni-704272
	I0229 02:07:29.215493  350492 profile.go:148] Saving config to /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/enable-default-cni-704272/config.json ...
	I0229 02:07:29.215666  350492 start.go:128] duration metric: createHost completed in 30.262266614s
	I0229 02:07:29.215698  350492 main.go:141] libmachine: (enable-default-cni-704272) Calling .GetSSHHostname
	I0229 02:07:29.217955  350492 main.go:141] libmachine: (enable-default-cni-704272) DBG | domain enable-default-cni-704272 has defined MAC address 52:54:00:58:15:66 in network mk-enable-default-cni-704272
	I0229 02:07:29.218313  350492 main.go:141] libmachine: (enable-default-cni-704272) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:15:66", ip: ""} in network mk-enable-default-cni-704272: {Iface:virbr3 ExpiryTime:2024-02-29 03:07:16 +0000 UTC Type:0 Mac:52:54:00:58:15:66 Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:enable-default-cni-704272 Clientid:01:52:54:00:58:15:66}
	I0229 02:07:29.218337  350492 main.go:141] libmachine: (enable-default-cni-704272) DBG | domain enable-default-cni-704272 has defined IP address 192.168.72.111 and MAC address 52:54:00:58:15:66 in network mk-enable-default-cni-704272
	I0229 02:07:29.218491  350492 main.go:141] libmachine: (enable-default-cni-704272) Calling .GetSSHPort
	I0229 02:07:29.218645  350492 main.go:141] libmachine: (enable-default-cni-704272) Calling .GetSSHKeyPath
	I0229 02:07:29.218823  350492 main.go:141] libmachine: (enable-default-cni-704272) Calling .GetSSHKeyPath
	I0229 02:07:29.218948  350492 main.go:141] libmachine: (enable-default-cni-704272) Calling .GetSSHUsername
	I0229 02:07:29.219145  350492 main.go:141] libmachine: Using SSH client type: native
	I0229 02:07:29.219366  350492 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.72.111 22 <nil> <nil>}
	I0229 02:07:29.219382  350492 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0229 02:07:29.331544  350492 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709172449.319087292
	
	I0229 02:07:29.331578  350492 fix.go:206] guest clock: 1709172449.319087292
	I0229 02:07:29.331588  350492 fix.go:219] Guest: 2024-02-29 02:07:29.319087292 +0000 UTC Remote: 2024-02-29 02:07:29.215681002 +0000 UTC m=+55.811789056 (delta=103.40629ms)
	I0229 02:07:29.331610  350492 fix.go:190] guest clock delta is within tolerance: 103.40629ms
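fix.go above compares the guest's `date +%s.%N` output against the host clock and only resyncs when the skew leaves tolerance; here the delta was ~103ms. The arithmetic, as a sketch using the values from this run:

package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

// clockDelta parses `date +%s.%N` output and returns guest-minus-host skew.
func clockDelta(guestOut string, host time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(strings.TrimSpace(guestOut), 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	return guest.Sub(host), nil
}

func main() {
	host := time.Date(2024, 2, 29, 2, 7, 29, 215681002, time.UTC)
	d, _ := clockDelta("1709172449.319087292", host)
	const tolerance = time.Second
	fmt.Printf("delta=%v within tolerance=%v: %v\n", d, tolerance, math.Abs(float64(d)) < float64(tolerance))
	// Prints a delta of roughly +103ms, matching the log line above.
}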
	I0229 02:07:29.331615  350492 start.go:83] releasing machines lock for "enable-default-cni-704272", held for 30.378414459s
	I0229 02:07:29.331639  350492 main.go:141] libmachine: (enable-default-cni-704272) Calling .DriverName
	I0229 02:07:29.331946  350492 main.go:141] libmachine: (enable-default-cni-704272) Calling .GetIP
	I0229 02:07:29.334602  350492 main.go:141] libmachine: (enable-default-cni-704272) DBG | domain enable-default-cni-704272 has defined MAC address 52:54:00:58:15:66 in network mk-enable-default-cni-704272
	I0229 02:07:29.334994  350492 main.go:141] libmachine: (enable-default-cni-704272) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:15:66", ip: ""} in network mk-enable-default-cni-704272: {Iface:virbr3 ExpiryTime:2024-02-29 03:07:16 +0000 UTC Type:0 Mac:52:54:00:58:15:66 Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:enable-default-cni-704272 Clientid:01:52:54:00:58:15:66}
	I0229 02:07:29.335022  350492 main.go:141] libmachine: (enable-default-cni-704272) DBG | domain enable-default-cni-704272 has defined IP address 192.168.72.111 and MAC address 52:54:00:58:15:66 in network mk-enable-default-cni-704272
	I0229 02:07:29.335163  350492 main.go:141] libmachine: (enable-default-cni-704272) Calling .DriverName
	I0229 02:07:29.335883  350492 main.go:141] libmachine: (enable-default-cni-704272) Calling .DriverName
	I0229 02:07:29.336111  350492 main.go:141] libmachine: (enable-default-cni-704272) Calling .DriverName
	I0229 02:07:29.336230  350492 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0229 02:07:29.336281  350492 main.go:141] libmachine: (enable-default-cni-704272) Calling .GetSSHHostname
	I0229 02:07:29.336292  350492 ssh_runner.go:195] Run: cat /version.json
	I0229 02:07:29.336306  350492 main.go:141] libmachine: (enable-default-cni-704272) Calling .GetSSHHostname
	I0229 02:07:29.338825  350492 main.go:141] libmachine: (enable-default-cni-704272) DBG | domain enable-default-cni-704272 has defined MAC address 52:54:00:58:15:66 in network mk-enable-default-cni-704272
	I0229 02:07:29.339112  350492 main.go:141] libmachine: (enable-default-cni-704272) DBG | domain enable-default-cni-704272 has defined MAC address 52:54:00:58:15:66 in network mk-enable-default-cni-704272
	I0229 02:07:29.339148  350492 main.go:141] libmachine: (enable-default-cni-704272) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:15:66", ip: ""} in network mk-enable-default-cni-704272: {Iface:virbr3 ExpiryTime:2024-02-29 03:07:16 +0000 UTC Type:0 Mac:52:54:00:58:15:66 Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:enable-default-cni-704272 Clientid:01:52:54:00:58:15:66}
	I0229 02:07:29.339176  350492 main.go:141] libmachine: (enable-default-cni-704272) DBG | domain enable-default-cni-704272 has defined IP address 192.168.72.111 and MAC address 52:54:00:58:15:66 in network mk-enable-default-cni-704272
	I0229 02:07:29.339340  350492 main.go:141] libmachine: (enable-default-cni-704272) Calling .GetSSHPort
	I0229 02:07:29.339511  350492 main.go:141] libmachine: (enable-default-cni-704272) Calling .GetSSHKeyPath
	I0229 02:07:29.339617  350492 main.go:141] libmachine: (enable-default-cni-704272) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:15:66", ip: ""} in network mk-enable-default-cni-704272: {Iface:virbr3 ExpiryTime:2024-02-29 03:07:16 +0000 UTC Type:0 Mac:52:54:00:58:15:66 Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:enable-default-cni-704272 Clientid:01:52:54:00:58:15:66}
	I0229 02:07:29.339644  350492 main.go:141] libmachine: (enable-default-cni-704272) DBG | domain enable-default-cni-704272 has defined IP address 192.168.72.111 and MAC address 52:54:00:58:15:66 in network mk-enable-default-cni-704272
	I0229 02:07:29.339704  350492 main.go:141] libmachine: (enable-default-cni-704272) Calling .GetSSHUsername
	I0229 02:07:29.339898  350492 main.go:141] libmachine: (enable-default-cni-704272) Calling .GetSSHPort
	I0229 02:07:29.339897  350492 sshutil.go:53] new ssh client: &{IP:192.168.72.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-309085/.minikube/machines/enable-default-cni-704272/id_rsa Username:docker}
	I0229 02:07:29.340060  350492 main.go:141] libmachine: (enable-default-cni-704272) Calling .GetSSHKeyPath
	I0229 02:07:29.340225  350492 main.go:141] libmachine: (enable-default-cni-704272) Calling .GetSSHUsername
	I0229 02:07:29.340413  350492 sshutil.go:53] new ssh client: &{IP:192.168.72.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-309085/.minikube/machines/enable-default-cni-704272/id_rsa Username:docker}
	I0229 02:07:29.424102  350492 ssh_runner.go:195] Run: systemctl --version
	I0229 02:07:29.453278  350492 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0229 02:07:29.459757  350492 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0229 02:07:29.459822  350492 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0229 02:07:29.477854  350492 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0229 02:07:29.477878  350492 start.go:475] detecting cgroup driver to use...
	I0229 02:07:29.477938  350492 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0229 02:07:29.512852  350492 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0229 02:07:29.528374  350492 docker.go:217] disabling cri-docker service (if available) ...
	I0229 02:07:29.528445  350492 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0229 02:07:29.545356  350492 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0229 02:07:29.561150  350492 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0229 02:07:29.702662  350492 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0229 02:07:29.885023  350492 docker.go:233] disabling docker service ...
	I0229 02:07:29.885097  350492 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0229 02:07:29.902530  350492 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0229 02:07:29.918202  350492 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0229 02:07:30.082457  350492 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0229 02:07:30.231528  350492 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0229 02:07:30.251807  350492 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0229 02:07:30.275127  350492 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0229 02:07:30.291284  350492 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0229 02:07:30.305801  350492 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0229 02:07:30.305885  350492 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0229 02:07:30.319370  350492 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0229 02:07:30.333310  350492 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0229 02:07:30.345944  350492 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0229 02:07:30.358947  350492 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0229 02:07:30.372134  350492 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
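
The run of sed commands above rewrites /etc/containerd/config.toml in place: pinning the sandbox image, forcing the runc v2 runtime, pointing conf_dir at /etc/cni/net.d and, per the "configuring containerd to use \"cgroupfs\"" line, setting SystemdCgroup = false. A sketch of that last rewrite in Go, using a line-anchored regexp equivalent to the sed -r expression (the function name is illustrative):

package main

import (
	"fmt"
	"os"
	"regexp"
)

// setCgroupfs rewrites any "SystemdCgroup = ..." line to false, preserving
// indentation, with the same effect as the sed -r expression in the log.
func setCgroupfs(path string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
	out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
	return os.WriteFile(path, out, 0644)
}

func main() {
	if err := setCgroupfs("/etc/containerd/config.toml"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
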
	I0229 02:07:30.389887  350492 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0229 02:07:30.404459  350492 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0229 02:07:30.404516  350492 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0229 02:07:30.426830  350492 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0229 02:07:30.444226  350492 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 02:07:30.597224  350492 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0229 02:07:30.633314  350492 start.go:522] Will wait 60s for socket path /run/containerd/containerd.sock
	I0229 02:07:30.633381  350492 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0229 02:07:30.639834  350492 retry.go:31] will retry after 1.217593895s: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
	I0229 02:07:31.858235  350492 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
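
The retry above is the standard wait-for-socket pattern: the first stat fails while containerd is still restarting, retry.go schedules another attempt (about 1.2s later here), and the second stat succeeds. A minimal sketch of such a poll loop, assuming a local socket path stands in for the remote stat:

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls for path until it exists or timeout elapses, roughly
// the "Will wait 60s for socket path" behavior logged above.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for %s", path)
		}
		time.Sleep(time.Second) // the real retry backs off with jitter
	}
}

func main() {
	if err := waitForSocket("/run/containerd/containerd.sock", 60*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("containerd socket is up")
}
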
	I0229 02:07:31.867310  350492 start.go:543] Will wait 60s for crictl version
	I0229 02:07:31.867392  350492 ssh_runner.go:195] Run: which crictl
	I0229 02:07:31.873002  350492 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0229 02:07:31.935329  350492 start.go:559] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.7.11
	RuntimeApiVersion:  v1
	I0229 02:07:31.935407  350492 ssh_runner.go:195] Run: containerd --version
	I0229 02:07:31.982220  350492 ssh_runner.go:195] Run: containerd --version
	I0229 02:07:32.024567  350492 out.go:177] * Preparing Kubernetes v1.28.4 on containerd 1.7.11 ...
	I0229 02:07:31.784070  349342 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0229 02:07:31.784676  349342 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:07:31.784677  349342 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0229 02:07:31.784694  349342 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0229 02:07:31.784713  349342 main.go:141] libmachine: (flannel-704272) Calling .GetSSHHostname
	I0229 02:07:31.788853  349342 main.go:141] libmachine: (flannel-704272) DBG | domain flannel-704272 has defined MAC address 52:54:00:fc:f6:44 in network mk-flannel-704272
	I0229 02:07:31.791815  349342 main.go:141] libmachine: (flannel-704272) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:f6:44", ip: ""} in network mk-flannel-704272: {Iface:virbr1 ExpiryTime:2024-02-29 03:06:50 +0000 UTC Type:0 Mac:52:54:00:fc:f6:44 Iaid: IPaddr:192.168.61.53 Prefix:24 Hostname:flannel-704272 Clientid:01:52:54:00:fc:f6:44}
	I0229 02:07:31.791842  349342 main.go:141] libmachine: (flannel-704272) DBG | domain flannel-704272 has defined IP address 192.168.61.53 and MAC address 52:54:00:fc:f6:44 in network mk-flannel-704272
	I0229 02:07:31.791997  349342 main.go:141] libmachine: (flannel-704272) Calling .GetSSHPort
	I0229 02:07:31.792133  349342 main.go:141] libmachine: (flannel-704272) Calling .GetSSHKeyPath
	I0229 02:07:31.792236  349342 main.go:141] libmachine: (flannel-704272) Calling .GetSSHUsername
	I0229 02:07:31.792321  349342 sshutil.go:53] new ssh client: &{IP:192.168.61.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-309085/.minikube/machines/flannel-704272/id_rsa Username:docker}
	I0229 02:07:31.807610  349342 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35355
	I0229 02:07:31.808088  349342 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:07:31.808651  349342 main.go:141] libmachine: Using API Version  1
	I0229 02:07:31.808674  349342 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:07:31.809197  349342 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:07:31.809394  349342 main.go:141] libmachine: (flannel-704272) Calling .GetState
	I0229 02:07:31.811304  349342 main.go:141] libmachine: (flannel-704272) Calling .DriverName
	I0229 02:07:31.811637  349342 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0229 02:07:31.811654  349342 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0229 02:07:31.811673  349342 main.go:141] libmachine: (flannel-704272) Calling .GetSSHHostname
	I0229 02:07:31.814972  349342 main.go:141] libmachine: (flannel-704272) DBG | domain flannel-704272 has defined MAC address 52:54:00:fc:f6:44 in network mk-flannel-704272
	I0229 02:07:31.815391  349342 main.go:141] libmachine: (flannel-704272) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:f6:44", ip: ""} in network mk-flannel-704272: {Iface:virbr1 ExpiryTime:2024-02-29 03:06:50 +0000 UTC Type:0 Mac:52:54:00:fc:f6:44 Iaid: IPaddr:192.168.61.53 Prefix:24 Hostname:flannel-704272 Clientid:01:52:54:00:fc:f6:44}
	I0229 02:07:31.815421  349342 main.go:141] libmachine: (flannel-704272) DBG | domain flannel-704272 has defined IP address 192.168.61.53 and MAC address 52:54:00:fc:f6:44 in network mk-flannel-704272
	I0229 02:07:31.815588  349342 main.go:141] libmachine: (flannel-704272) Calling .GetSSHPort
	I0229 02:07:31.815767  349342 main.go:141] libmachine: (flannel-704272) Calling .GetSSHKeyPath
	I0229 02:07:31.815929  349342 main.go:141] libmachine: (flannel-704272) Calling .GetSSHUsername
	I0229 02:07:31.816044  349342 sshutil.go:53] new ssh client: &{IP:192.168.61.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-309085/.minikube/machines/flannel-704272/id_rsa Username:docker}
	I0229 02:07:31.885521  349342 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0229 02:07:31.974830  349342 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0229 02:07:32.035093  349342 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0229 02:07:32.325831  349342 kapi.go:248] "coredns" deployment in "kube-system" namespace and "flannel-704272" context rescaled to 1 replicas
	I0229 02:07:32.325878  349342 start.go:223] Will wait 15m0s for node &{Name: IP:192.168.61.53 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0229 02:07:32.329184  349342 out.go:177] * Verifying Kubernetes components...
	I0229 02:07:30.091507  350824 provision.go:86] duration metric: configureAuth took 466.479307ms
	I0229 02:07:30.091539  350824 buildroot.go:189] setting minikube options for container-runtime
	I0229 02:07:30.091738  350824 config.go:182] Loaded profile config "kubernetes-upgrade-335938": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.29.0-rc.2
	I0229 02:07:30.091755  350824 machine.go:91] provisioned docker machine in 732.642049ms
	I0229 02:07:30.091767  350824 start.go:300] post-start starting for "kubernetes-upgrade-335938" (driver="kvm2")
	I0229 02:07:30.091783  350824 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0229 02:07:30.091822  350824 main.go:141] libmachine: (kubernetes-upgrade-335938) Calling .DriverName
	I0229 02:07:30.092157  350824 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0229 02:07:30.092188  350824 main.go:141] libmachine: (kubernetes-upgrade-335938) Calling .GetSSHHostname
	I0229 02:07:30.095113  350824 main.go:141] libmachine: (kubernetes-upgrade-335938) DBG | domain kubernetes-upgrade-335938 has defined MAC address 52:54:00:e1:ea:4e in network mk-kubernetes-upgrade-335938
	I0229 02:07:30.095491  350824 main.go:141] libmachine: (kubernetes-upgrade-335938) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:ea:4e", ip: ""} in network mk-kubernetes-upgrade-335938: {Iface:virbr2 ExpiryTime:2024-02-29 03:01:43 +0000 UTC Type:0 Mac:52:54:00:e1:ea:4e Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:kubernetes-upgrade-335938 Clientid:01:52:54:00:e1:ea:4e}
	I0229 02:07:30.095515  350824 main.go:141] libmachine: (kubernetes-upgrade-335938) DBG | domain kubernetes-upgrade-335938 has defined IP address 192.168.50.62 and MAC address 52:54:00:e1:ea:4e in network mk-kubernetes-upgrade-335938
	I0229 02:07:30.095692  350824 main.go:141] libmachine: (kubernetes-upgrade-335938) Calling .GetSSHPort
	I0229 02:07:30.095927  350824 main.go:141] libmachine: (kubernetes-upgrade-335938) Calling .GetSSHKeyPath
	I0229 02:07:30.096127  350824 main.go:141] libmachine: (kubernetes-upgrade-335938) Calling .GetSSHUsername
	I0229 02:07:30.096293  350824 sshutil.go:53] new ssh client: &{IP:192.168.50.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-309085/.minikube/machines/kubernetes-upgrade-335938/id_rsa Username:docker}
	I0229 02:07:30.188796  350824 ssh_runner.go:195] Run: cat /etc/os-release
	I0229 02:07:30.196396  350824 info.go:137] Remote host: Buildroot 2023.02.9
	I0229 02:07:30.196430  350824 filesync.go:126] Scanning /home/jenkins/minikube-integration/18063-309085/.minikube/addons for local assets ...
	I0229 02:07:30.196511  350824 filesync.go:126] Scanning /home/jenkins/minikube-integration/18063-309085/.minikube/files for local assets ...
	I0229 02:07:30.196604  350824 filesync.go:149] local asset: /home/jenkins/minikube-integration/18063-309085/.minikube/files/etc/ssl/certs/3163362.pem -> 3163362.pem in /etc/ssl/certs
	I0229 02:07:30.196700  350824 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0229 02:07:30.213240  350824 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-309085/.minikube/files/etc/ssl/certs/3163362.pem --> /etc/ssl/certs/3163362.pem (1708 bytes)
	I0229 02:07:30.246684  350824 start.go:303] post-start completed in 154.899284ms
	I0229 02:07:30.246716  350824 fix.go:56] fixHost completed within 914.936876ms
	I0229 02:07:30.246743  350824 main.go:141] libmachine: (kubernetes-upgrade-335938) Calling .GetSSHHostname
	I0229 02:07:30.249608  350824 main.go:141] libmachine: (kubernetes-upgrade-335938) DBG | domain kubernetes-upgrade-335938 has defined MAC address 52:54:00:e1:ea:4e in network mk-kubernetes-upgrade-335938
	I0229 02:07:30.250105  350824 main.go:141] libmachine: (kubernetes-upgrade-335938) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:ea:4e", ip: ""} in network mk-kubernetes-upgrade-335938: {Iface:virbr2 ExpiryTime:2024-02-29 03:01:43 +0000 UTC Type:0 Mac:52:54:00:e1:ea:4e Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:kubernetes-upgrade-335938 Clientid:01:52:54:00:e1:ea:4e}
	I0229 02:07:30.250138  350824 main.go:141] libmachine: (kubernetes-upgrade-335938) DBG | domain kubernetes-upgrade-335938 has defined IP address 192.168.50.62 and MAC address 52:54:00:e1:ea:4e in network mk-kubernetes-upgrade-335938
	I0229 02:07:30.250306  350824 main.go:141] libmachine: (kubernetes-upgrade-335938) Calling .GetSSHPort
	I0229 02:07:30.250549  350824 main.go:141] libmachine: (kubernetes-upgrade-335938) Calling .GetSSHKeyPath
	I0229 02:07:30.250748  350824 main.go:141] libmachine: (kubernetes-upgrade-335938) Calling .GetSSHKeyPath
	I0229 02:07:30.250966  350824 main.go:141] libmachine: (kubernetes-upgrade-335938) Calling .GetSSHUsername
	I0229 02:07:30.251180  350824 main.go:141] libmachine: Using SSH client type: native
	I0229 02:07:30.251402  350824 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.50.62 22 <nil> <nil>}
	I0229 02:07:30.251419  350824 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0229 02:07:30.372287  350824 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709172450.368639510
	
	I0229 02:07:30.372311  350824 fix.go:206] guest clock: 1709172450.368639510
	I0229 02:07:30.372379  350824 fix.go:219] Guest: 2024-02-29 02:07:30.36863951 +0000 UTC Remote: 2024-02-29 02:07:30.246719867 +0000 UTC m=+35.227207819 (delta=121.919643ms)
	I0229 02:07:30.372423  350824 fix.go:190] guest clock delta is within tolerance: 121.919643ms
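
The guest-clock check works by running date +%s.%N inside the VM, parsing the result, and comparing it against the host's wall clock; a delta inside the tolerance (121.9ms here) means no resync is needed. A sketch of the parse-and-compare step (the tolerance itself comes from minikube's start flags and is not shown in this log):

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock converts "1709172450.368639510" (date +%s.%N output)
// into a time.Time.
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1709172450.368639510")
	if err != nil {
		panic(err)
	}
	// Host-side timestamp taken from the "Remote:" field in the log above.
	remote := time.Date(2024, 2, 29, 2, 7, 30, 246719867, time.UTC)
	fmt.Printf("guest clock delta: %v (compare against configured tolerance)\n", guest.Sub(remote))
}
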
	I0229 02:07:30.372431  350824 start.go:83] releasing machines lock for "kubernetes-upgrade-335938", held for 1.04068797s
	I0229 02:07:30.372473  350824 main.go:141] libmachine: (kubernetes-upgrade-335938) Calling .DriverName
	I0229 02:07:30.372913  350824 main.go:141] libmachine: (kubernetes-upgrade-335938) Calling .GetIP
	I0229 02:07:30.376200  350824 main.go:141] libmachine: (kubernetes-upgrade-335938) DBG | domain kubernetes-upgrade-335938 has defined MAC address 52:54:00:e1:ea:4e in network mk-kubernetes-upgrade-335938
	I0229 02:07:30.376682  350824 main.go:141] libmachine: (kubernetes-upgrade-335938) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:ea:4e", ip: ""} in network mk-kubernetes-upgrade-335938: {Iface:virbr2 ExpiryTime:2024-02-29 03:01:43 +0000 UTC Type:0 Mac:52:54:00:e1:ea:4e Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:kubernetes-upgrade-335938 Clientid:01:52:54:00:e1:ea:4e}
	I0229 02:07:30.376741  350824 main.go:141] libmachine: (kubernetes-upgrade-335938) DBG | domain kubernetes-upgrade-335938 has defined IP address 192.168.50.62 and MAC address 52:54:00:e1:ea:4e in network mk-kubernetes-upgrade-335938
	I0229 02:07:30.376870  350824 main.go:141] libmachine: (kubernetes-upgrade-335938) Calling .DriverName
	I0229 02:07:30.377496  350824 main.go:141] libmachine: (kubernetes-upgrade-335938) Calling .DriverName
	I0229 02:07:30.377742  350824 main.go:141] libmachine: (kubernetes-upgrade-335938) Calling .DriverName
	I0229 02:07:30.377950  350824 ssh_runner.go:195] Run: cat /version.json
	I0229 02:07:30.377980  350824 main.go:141] libmachine: (kubernetes-upgrade-335938) Calling .GetSSHHostname
	I0229 02:07:30.378032  350824 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0229 02:07:30.378116  350824 main.go:141] libmachine: (kubernetes-upgrade-335938) Calling .GetSSHHostname
	I0229 02:07:30.381626  350824 main.go:141] libmachine: (kubernetes-upgrade-335938) DBG | domain kubernetes-upgrade-335938 has defined MAC address 52:54:00:e1:ea:4e in network mk-kubernetes-upgrade-335938
	I0229 02:07:30.382023  350824 main.go:141] libmachine: (kubernetes-upgrade-335938) DBG | domain kubernetes-upgrade-335938 has defined MAC address 52:54:00:e1:ea:4e in network mk-kubernetes-upgrade-335938
	I0229 02:07:30.382252  350824 main.go:141] libmachine: (kubernetes-upgrade-335938) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:ea:4e", ip: ""} in network mk-kubernetes-upgrade-335938: {Iface:virbr2 ExpiryTime:2024-02-29 03:01:43 +0000 UTC Type:0 Mac:52:54:00:e1:ea:4e Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:kubernetes-upgrade-335938 Clientid:01:52:54:00:e1:ea:4e}
	I0229 02:07:30.382324  350824 main.go:141] libmachine: (kubernetes-upgrade-335938) DBG | domain kubernetes-upgrade-335938 has defined IP address 192.168.50.62 and MAC address 52:54:00:e1:ea:4e in network mk-kubernetes-upgrade-335938
	I0229 02:07:30.382627  350824 main.go:141] libmachine: (kubernetes-upgrade-335938) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:ea:4e", ip: ""} in network mk-kubernetes-upgrade-335938: {Iface:virbr2 ExpiryTime:2024-02-29 03:01:43 +0000 UTC Type:0 Mac:52:54:00:e1:ea:4e Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:kubernetes-upgrade-335938 Clientid:01:52:54:00:e1:ea:4e}
	I0229 02:07:30.382759  350824 main.go:141] libmachine: (kubernetes-upgrade-335938) Calling .GetSSHPort
	I0229 02:07:30.382727  350824 main.go:141] libmachine: (kubernetes-upgrade-335938) DBG | domain kubernetes-upgrade-335938 has defined IP address 192.168.50.62 and MAC address 52:54:00:e1:ea:4e in network mk-kubernetes-upgrade-335938
	I0229 02:07:30.382978  350824 main.go:141] libmachine: (kubernetes-upgrade-335938) Calling .GetSSHPort
	I0229 02:07:30.383214  350824 main.go:141] libmachine: (kubernetes-upgrade-335938) Calling .GetSSHKeyPath
	I0229 02:07:30.383288  350824 main.go:141] libmachine: (kubernetes-upgrade-335938) Calling .GetSSHKeyPath
	I0229 02:07:30.383357  350824 main.go:141] libmachine: (kubernetes-upgrade-335938) Calling .GetSSHUsername
	I0229 02:07:30.383438  350824 main.go:141] libmachine: (kubernetes-upgrade-335938) Calling .GetSSHUsername
	I0229 02:07:30.383486  350824 sshutil.go:53] new ssh client: &{IP:192.168.50.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-309085/.minikube/machines/kubernetes-upgrade-335938/id_rsa Username:docker}
	I0229 02:07:30.383600  350824 sshutil.go:53] new ssh client: &{IP:192.168.50.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-309085/.minikube/machines/kubernetes-upgrade-335938/id_rsa Username:docker}
	I0229 02:07:30.505761  350824 ssh_runner.go:195] Run: systemctl --version
	I0229 02:07:30.517535  350824 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0229 02:07:30.527892  350824 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0229 02:07:30.527978  350824 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0229 02:07:30.542762  350824 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0229 02:07:30.542788  350824 start.go:475] detecting cgroup driver to use...
	I0229 02:07:30.542878  350824 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0229 02:07:30.568584  350824 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0229 02:07:30.588769  350824 docker.go:217] disabling cri-docker service (if available) ...
	I0229 02:07:30.588842  350824 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0229 02:07:30.608613  350824 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0229 02:07:30.637798  350824 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0229 02:07:30.855759  350824 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0229 02:07:31.019142  350824 docker.go:233] disabling docker service ...
	I0229 02:07:31.019221  350824 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0229 02:07:31.043974  350824 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0229 02:07:31.062189  350824 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0229 02:07:31.221532  350824 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0229 02:07:31.370938  350824 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0229 02:07:31.390811  350824 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0229 02:07:31.422385  350824 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0229 02:07:31.444342  350824 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0229 02:07:31.463726  350824 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0229 02:07:31.463825  350824 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0229 02:07:31.483784  350824 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0229 02:07:31.502237  350824 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0229 02:07:31.517003  350824 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0229 02:07:31.530609  350824 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0229 02:07:31.547114  350824 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0229 02:07:31.560957  350824 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0229 02:07:31.575703  350824 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0229 02:07:31.588336  350824 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 02:07:31.774302  350824 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0229 02:07:31.813898  350824 start.go:522] Will wait 60s for socket path /run/containerd/containerd.sock
	I0229 02:07:31.813962  350824 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0229 02:07:31.822741  350824 retry.go:31] will retry after 1.204414453s: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
	I0229 02:07:33.027956  350824 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0229 02:07:33.037872  350824 start.go:543] Will wait 60s for crictl version
	I0229 02:07:33.037951  350824 ssh_runner.go:195] Run: which crictl
	I0229 02:07:33.062712  350824 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0229 02:07:33.161291  350824 start.go:559] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.7.11
	RuntimeApiVersion:  v1
	I0229 02:07:33.161369  350824 ssh_runner.go:195] Run: containerd --version
	I0229 02:07:33.232950  350824 ssh_runner.go:195] Run: containerd --version
	I0229 02:07:33.299630  350824 out.go:177] * Preparing Kubernetes v1.29.0-rc.2 on containerd 1.7.11 ...
	I0229 02:07:32.026585  350492 main.go:141] libmachine: (enable-default-cni-704272) Calling .GetIP
	I0229 02:07:32.029991  350492 main.go:141] libmachine: (enable-default-cni-704272) DBG | domain enable-default-cni-704272 has defined MAC address 52:54:00:58:15:66 in network mk-enable-default-cni-704272
	I0229 02:07:32.030448  350492 main.go:141] libmachine: (enable-default-cni-704272) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:15:66", ip: ""} in network mk-enable-default-cni-704272: {Iface:virbr3 ExpiryTime:2024-02-29 03:07:16 +0000 UTC Type:0 Mac:52:54:00:58:15:66 Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:enable-default-cni-704272 Clientid:01:52:54:00:58:15:66}
	I0229 02:07:32.030477  350492 main.go:141] libmachine: (enable-default-cni-704272) DBG | domain enable-default-cni-704272 has defined IP address 192.168.72.111 and MAC address 52:54:00:58:15:66 in network mk-enable-default-cni-704272
	I0229 02:07:32.030658  350492 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0229 02:07:32.036958  350492 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
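
The bash one-liner above updates /etc/hosts idempotently: grep -v drops any stale host.minikube.internal entry, the fresh record is appended, and the temp file is copied back over /etc/hosts. A hedged Go equivalent of that rewrite:

package main

import (
	"fmt"
	"os"
	"strings"
)

// injectHostRecord removes any line ending in "\thost.minikube.internal"
// and appends ip + "\thost.minikube.internal", as the shell pipeline does.
func injectHostRecord(path, ip string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\thost.minikube.internal") {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\thost.minikube.internal")
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := injectHostRecord("/etc/hosts", "192.168.72.1"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
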
	I0229 02:07:32.056972  350492 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I0229 02:07:32.057061  350492 ssh_runner.go:195] Run: sudo crictl images --output json
	I0229 02:07:32.128229  350492 containerd.go:608] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0229 02:07:32.128417  350492 ssh_runner.go:195] Run: which lz4
	I0229 02:07:32.135215  350492 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0229 02:07:32.142384  350492 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0229 02:07:32.142425  350492 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-309085/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (457457495 bytes)
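
This is the preload slow path: crictl did not report the pinned kube-apiserver image, so the ~457 MB preloaded-images tarball is copied to the VM for extraction (contrast with kubernetes-upgrade-335938 below, where all images are already present and extraction is skipped). A sketch of the image check that gates the copy; the JSON field names follow the CRI ListImages output, but treat the exact shape as an assumption:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
	"strings"
)

// crictlImages models the slice of `crictl images --output json` we need;
// field names are assumed from the CRI ListImages response.
type crictlImages struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

// hasImage reports whether any loaded image carries the given tag, the
// check minikube uses to decide whether the preload tarball must be copied.
func hasImage(tag string) (bool, error) {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		return false, err
	}
	var imgs crictlImages
	if err := json.Unmarshal(out, &imgs); err != nil {
		return false, err
	}
	for _, img := range imgs.Images {
		for _, t := range img.RepoTags {
			if strings.Contains(t, tag) {
				return true, nil
			}
		}
	}
	return false, nil
}

func main() {
	ok, err := hasImage("registry.k8s.io/kube-apiserver:v1.28.4")
	if err != nil {
		panic(err)
	}
	fmt.Println("preloaded:", ok)
}
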
	I0229 02:07:32.330448  349342 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 02:07:33.269434  349342 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.383858842s)
	I0229 02:07:33.269469  349342 start.go:929] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
	I0229 02:07:34.504175  349342 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.52928763s)
	I0229 02:07:34.504235  349342 main.go:141] libmachine: Making call to close driver server
	I0229 02:07:34.504249  349342 main.go:141] libmachine: (flannel-704272) Calling .Close
	I0229 02:07:34.504340  349342 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.46922091s)
	I0229 02:07:34.504358  349342 main.go:141] libmachine: Making call to close driver server
	I0229 02:07:34.504365  349342 main.go:141] libmachine: (flannel-704272) Calling .Close
	I0229 02:07:34.504422  349342 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.173948478s)
	I0229 02:07:34.505722  349342 node_ready.go:35] waiting up to 15m0s for node "flannel-704272" to be "Ready" ...
	I0229 02:07:34.505950  349342 main.go:141] libmachine: (flannel-704272) DBG | Closing plugin on server side
	I0229 02:07:34.505971  349342 main.go:141] libmachine: (flannel-704272) DBG | Closing plugin on server side
	I0229 02:07:34.505988  349342 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:07:34.506007  349342 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:07:34.506024  349342 main.go:141] libmachine: Making call to close driver server
	I0229 02:07:34.506032  349342 main.go:141] libmachine: (flannel-704272) Calling .Close
	I0229 02:07:34.506034  349342 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:07:34.506054  349342 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:07:34.506089  349342 main.go:141] libmachine: Making call to close driver server
	I0229 02:07:34.506099  349342 main.go:141] libmachine: (flannel-704272) Calling .Close
	I0229 02:07:34.506708  349342 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:07:34.506715  349342 main.go:141] libmachine: (flannel-704272) DBG | Closing plugin on server side
	I0229 02:07:34.506723  349342 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:07:34.506884  349342 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:07:34.506895  349342 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:07:34.523141  349342 main.go:141] libmachine: Making call to close driver server
	I0229 02:07:34.523164  349342 main.go:141] libmachine: (flannel-704272) Calling .Close
	I0229 02:07:34.523473  349342 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:07:34.523491  349342 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:07:34.523508  349342 main.go:141] libmachine: (flannel-704272) DBG | Closing plugin on server side
	I0229 02:07:34.525143  349342 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0229 02:07:33.300783  350824 main.go:141] libmachine: (kubernetes-upgrade-335938) Calling .GetIP
	I0229 02:07:33.304392  350824 main.go:141] libmachine: (kubernetes-upgrade-335938) DBG | domain kubernetes-upgrade-335938 has defined MAC address 52:54:00:e1:ea:4e in network mk-kubernetes-upgrade-335938
	I0229 02:07:33.304924  350824 main.go:141] libmachine: (kubernetes-upgrade-335938) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:ea:4e", ip: ""} in network mk-kubernetes-upgrade-335938: {Iface:virbr2 ExpiryTime:2024-02-29 03:01:43 +0000 UTC Type:0 Mac:52:54:00:e1:ea:4e Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:kubernetes-upgrade-335938 Clientid:01:52:54:00:e1:ea:4e}
	I0229 02:07:33.304969  350824 main.go:141] libmachine: (kubernetes-upgrade-335938) DBG | domain kubernetes-upgrade-335938 has defined IP address 192.168.50.62 and MAC address 52:54:00:e1:ea:4e in network mk-kubernetes-upgrade-335938
	I0229 02:07:33.305197  350824 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0229 02:07:33.316481  350824 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime containerd
	I0229 02:07:33.316569  350824 ssh_runner.go:195] Run: sudo crictl images --output json
	I0229 02:07:33.412604  350824 containerd.go:612] all images are preloaded for containerd runtime.
	I0229 02:07:33.412644  350824 containerd.go:519] Images already preloaded, skipping extraction
	I0229 02:07:33.412714  350824 ssh_runner.go:195] Run: sudo crictl images --output json
	I0229 02:07:33.505362  350824 containerd.go:612] all images are preloaded for containerd runtime.
	I0229 02:07:33.505386  350824 cache_images.go:84] Images are preloaded, skipping loading
	I0229 02:07:33.505447  350824 ssh_runner.go:195] Run: sudo crictl info
	I0229 02:07:33.572185  350824 cni.go:84] Creating CNI manager for ""
	I0229 02:07:33.572209  350824 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0229 02:07:33.572228  350824 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0229 02:07:33.572247  350824 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.62 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-335938 NodeName:kubernetes-upgrade-335938 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.62"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.62 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube
/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0229 02:07:33.572409  350824 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.62
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "kubernetes-upgrade-335938"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.62
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.62"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
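
The block above is the full multi-document kubeadm config minikube renders: InitConfiguration (node registration, CRI socket), ClusterConfiguration (cert SANs, admission plugins, etcd data dir), KubeletConfiguration (cgroupfs driver, disabled disk eviction), and KubeProxyConfiguration. A small stdlib-only sanity check that splits such a config on document separators and lists each kind:

package main

import (
	"fmt"
	"strings"
)

// kinds scans a multi-document kubeadm config (like the one printed above)
// and returns each document's "kind:" value, a cheap check that all four
// expected objects are present.
func kinds(config string) []string {
	var out []string
	for _, doc := range strings.Split(config, "\n---\n") {
		for _, line := range strings.Split(doc, "\n") {
			if strings.HasPrefix(line, "kind: ") {
				out = append(out, strings.TrimPrefix(line, "kind: "))
			}
		}
	}
	return out
}

func main() {
	config := `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
`
	// Prints: [InitConfiguration ClusterConfiguration KubeletConfiguration KubeProxyConfiguration]
	fmt.Println(kinds(config))
}
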
	
	I0229 02:07:33.572521  350824 kubeadm.go:976] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=kubernetes-upgrade-335938 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.62
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:kubernetes-upgrade-335938 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
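
The kubelet [Service] drop-in above is rendered from a template, with every flag (binary path, CRI endpoint, hostname override, node IP) filled from the cluster config that follows it. A hedged text/template sketch of that rendering; the field names here are illustrative, not minikube's actual structs:

package main

import (
	"os"
	"text/template"
)

// tmpl is a trimmed-down version of the drop-in shown above.
const tmpl = `[Unit]
Wants=containerd.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --container-runtime-endpoint={{.CRISocket}} --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(tmpl))
	err := t.Execute(os.Stdout, map[string]string{
		"KubernetesVersion": "v1.29.0-rc.2",
		"CRISocket":         "unix:///run/containerd/containerd.sock",
		"NodeName":          "kubernetes-upgrade-335938",
		"NodeIP":            "192.168.50.62",
	})
	if err != nil {
		panic(err)
	}
}
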
	I0229 02:07:33.572577  350824 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I0229 02:07:33.587447  350824 binaries.go:44] Found k8s binaries, skipping transfer
	I0229 02:07:33.587522  350824 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0229 02:07:33.615146  350824 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (403 bytes)
	I0229 02:07:33.651908  350824 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0229 02:07:33.688399  350824 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2122 bytes)
	I0229 02:07:33.746326  350824 ssh_runner.go:195] Run: grep 192.168.50.62	control-plane.minikube.internal$ /etc/hosts
	I0229 02:07:33.768438  350824 certs.go:56] Setting up /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/kubernetes-upgrade-335938 for IP: 192.168.50.62
	I0229 02:07:33.768487  350824 certs.go:190] acquiring lock for shared ca certs: {Name:mkd93205d1e0ff28501dacf7d21e224f19de9501 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:07:33.768678  350824 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18063-309085/.minikube/ca.key
	I0229 02:07:33.768731  350824 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18063-309085/.minikube/proxy-client-ca.key
	I0229 02:07:33.768809  350824 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/kubernetes-upgrade-335938/client.key
	I0229 02:07:33.768861  350824 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/kubernetes-upgrade-335938/apiserver.key.568d448d
	I0229 02:07:33.768929  350824 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/kubernetes-upgrade-335938/proxy-client.key
	I0229 02:07:33.769059  350824 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-309085/.minikube/certs/home/jenkins/minikube-integration/18063-309085/.minikube/certs/316336.pem (1338 bytes)
	W0229 02:07:33.769089  350824 certs.go:433] ignoring /home/jenkins/minikube-integration/18063-309085/.minikube/certs/home/jenkins/minikube-integration/18063-309085/.minikube/certs/316336_empty.pem, impossibly tiny 0 bytes
	I0229 02:07:33.769100  350824 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-309085/.minikube/certs/home/jenkins/minikube-integration/18063-309085/.minikube/certs/ca-key.pem (1679 bytes)
	I0229 02:07:33.769121  350824 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-309085/.minikube/certs/home/jenkins/minikube-integration/18063-309085/.minikube/certs/ca.pem (1082 bytes)
	I0229 02:07:33.769152  350824 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-309085/.minikube/certs/home/jenkins/minikube-integration/18063-309085/.minikube/certs/cert.pem (1123 bytes)
	I0229 02:07:33.769173  350824 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-309085/.minikube/certs/home/jenkins/minikube-integration/18063-309085/.minikube/certs/key.pem (1675 bytes)
	I0229 02:07:33.769218  350824 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-309085/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18063-309085/.minikube/files/etc/ssl/certs/3163362.pem (1708 bytes)
	I0229 02:07:33.769827  350824 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/kubernetes-upgrade-335938/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0229 02:07:33.883068  350824 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/kubernetes-upgrade-335938/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0229 02:07:33.929471  350824 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/kubernetes-upgrade-335938/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0229 02:07:33.971323  350824 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/kubernetes-upgrade-335938/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0229 02:07:34.022441  350824 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-309085/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0229 02:07:34.108191  350824 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-309085/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0229 02:07:34.158722  350824 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-309085/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0229 02:07:34.202045  350824 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-309085/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0229 02:07:34.249301  350824 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-309085/.minikube/certs/316336.pem --> /usr/share/ca-certificates/316336.pem (1338 bytes)
	I0229 02:07:34.298292  350824 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-309085/.minikube/files/etc/ssl/certs/3163362.pem --> /usr/share/ca-certificates/3163362.pem (1708 bytes)
	I0229 02:07:34.369651  350824 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-309085/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0229 02:07:34.430862  350824 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0229 02:07:34.475940  350824 ssh_runner.go:195] Run: openssl version
	I0229 02:07:34.486361  350824 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/316336.pem && ln -fs /usr/share/ca-certificates/316336.pem /etc/ssl/certs/316336.pem"
	I0229 02:07:34.525527  350824 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/316336.pem
	I0229 02:07:34.532863  350824 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 29 01:18 /usr/share/ca-certificates/316336.pem
	I0229 02:07:34.532924  350824 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/316336.pem
	I0229 02:07:34.540210  350824 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/316336.pem /etc/ssl/certs/51391683.0"
	I0229 02:07:34.555746  350824 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3163362.pem && ln -fs /usr/share/ca-certificates/3163362.pem /etc/ssl/certs/3163362.pem"
	I0229 02:07:34.569726  350824 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3163362.pem
	I0229 02:07:34.575665  350824 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 29 01:18 /usr/share/ca-certificates/3163362.pem
	I0229 02:07:34.575728  350824 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3163362.pem
	I0229 02:07:34.585179  350824 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3163362.pem /etc/ssl/certs/3ec20f2e.0"
	I0229 02:07:34.597090  350824 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0229 02:07:34.611606  350824 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0229 02:07:34.617824  350824 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 29 01:12 /usr/share/ca-certificates/minikubeCA.pem
	I0229 02:07:34.617916  350824 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0229 02:07:34.628220  350824 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
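
The three repeated blocks above install each CA into /usr/share/ca-certificates and then symlink it under its OpenSSL subject hash (51391683.0, 3ec20f2e.0, b5213941.0) so TLS verification can locate it by hash. A sketch of one iteration, shelling out to openssl for the hash exactly as the log does:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// installCA links certPath into /etc/ssl/certs under its OpenSSL subject
// hash, the same effect as the ln -fs commands in the log above.
func installCA(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := "/etc/ssl/certs/" + hash + ".0"
	os.Remove(link) // ln -fs semantics: replace any existing link
	return os.Symlink(certPath, link)
}

func main() {
	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
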
	I0229 02:07:34.642716  350824 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0229 02:07:34.651547  350824 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0229 02:07:34.659662  350824 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0229 02:07:34.668691  350824 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0229 02:07:34.677982  350824 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0229 02:07:34.688829  350824 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0229 02:07:34.699221  350824 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
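
Each openssl x509 -checkend 86400 above asks whether a certificate expires within the next 24 hours; a non-zero exit would trigger regeneration before kubeadm runs. A stdlib-only sketch of the same check using crypto/x509:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM cert at path expires within d,
// matching openssl's -checkend semantics.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/etcd/server.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}
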
	I0229 02:07:34.711444  350824 kubeadm.go:404] StartCluster: {Name:kubernetes-upgrade-335938 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernet
esVersion:v1.29.0-rc.2 ClusterName:kubernetes-upgrade-335938 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.62 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:
262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 02:07:34.711583  350824 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0229 02:07:34.711684  350824 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0229 02:07:34.785249  350824 cri.go:89] found id: "411140276b679b2e9741606990dee6566b35d160ca11d293df22e8b4b1d27a33"
	I0229 02:07:34.785277  350824 cri.go:89] found id: "a0123892ecea132a6137daf45e3b7e773957dd6395ef0dac9d9787c0380cc924"
	I0229 02:07:34.785283  350824 cri.go:89] found id: "78c483812ce39f712a5772be326e575fb6a3d525dd9e32f122598ce511212555"
	I0229 02:07:34.785287  350824 cri.go:89] found id: "90448f2a0e6b9b45faa862f388769edbeee26fc729f76ebb9ac2195f91b6cf9e"
	I0229 02:07:34.785292  350824 cri.go:89] found id: "1e6163c9790a827bb2ae2062c3d63338adc05380d53777496c8b4b4d3e42019c"
	I0229 02:07:34.785296  350824 cri.go:89] found id: "74b26acdc623c943ac1a21bfd9474dc6b38509bd7bd5998706a787dba2965663"
	I0229 02:07:34.785300  350824 cri.go:89] found id: "9fafe8794413f107758942a1854756857bd6f0171f9e2b0e9fc0747650377c33"
	I0229 02:07:34.785304  350824 cri.go:89] found id: ""
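
The trailing empty found id: "" is an artifact of splitting the newline-terminated output of crictl ps --quiet. A sketch of that listing step, filtering out the empty entry (command shape copied from the Run line above):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listKubeSystemContainers returns the container IDs that crictl reports
// for the kube-system namespace, as in the "found id:" lines above.
func listKubeSystemContainers() ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		return nil, err
	}
	var ids []string
	for _, id := range strings.Split(string(out), "\n") {
		if id = strings.TrimSpace(id); id != "" {
			ids = append(ids, id)
		}
	}
	return ids, nil
}

func main() {
	ids, err := listKubeSystemContainers()
	if err != nil {
		panic(err)
	}
	fmt.Printf("found %d kube-system containers\n", len(ids))
}
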
	I0229 02:07:34.785359  350824 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0229 02:07:34.825091  350824 cri.go:116] JSON = [{"ociVersion":"1.0.2-dev","id":"1e6163c9790a827bb2ae2062c3d63338adc05380d53777496c8b4b4d3e42019c","pid":1066,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1e6163c9790a827bb2ae2062c3d63338adc05380d53777496c8b4b4d3e42019c","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1e6163c9790a827bb2ae2062c3d63338adc05380d53777496c8b4b4d3e42019c/rootfs","created":"2024-02-29T02:06:44.770689375Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-controller-manager:v1.29.0-rc.2","io.kubernetes.cri.sandbox-id":"d17ab8747372b6e9a63b8e96bae2f92f462ce1da05a88416bf69cd70b67d80ae","io.kubernetes.cri.sandbox-name":"kube-controller-manager-kubernetes-upgrade-335938","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"9e8f376d2a31eb70ad6df6920e5490f3"},"owner":"root"},{
"ociVersion":"1.0.2-dev","id":"411140276b679b2e9741606990dee6566b35d160ca11d293df22e8b4b1d27a33","pid":1528,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/411140276b679b2e9741606990dee6566b35d160ca11d293df22e8b4b1d27a33","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/411140276b679b2e9741606990dee6566b35d160ca11d293df22e8b4b1d27a33/rootfs","created":"2024-02-29T02:07:06.652405863Z","annotations":{"io.kubernetes.cri.container-name":"coredns","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/coredns/coredns:v1.11.1","io.kubernetes.cri.sandbox-id":"e40a138046e5ab5aa2e5eae50edc05f0ed0f0098705b041dc3a45710bccbecd7","io.kubernetes.cri.sandbox-name":"coredns-76f75df574-ck24q","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"b9c803d9-4ebf-4bf6-b7ea-d4187df2e1e3"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"5a0ead32e04f139a2459936a27f000f2d3c845b1f96c942d6df1aee10b5e656b","pid":924,"st
atus":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/5a0ead32e04f139a2459936a27f000f2d3c845b1f96c942d6df1aee10b5e656b","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/5a0ead32e04f139a2459936a27f000f2d3c845b1f96c942d6df1aee10b5e656b/rootfs","created":"2024-02-29T02:06:44.310600973Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"256","io.kubernetes.cri.sandbox-id":"5a0ead32e04f139a2459936a27f000f2d3c845b1f96c942d6df1aee10b5e656b","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-kubernetes-upgrade-335938_ae1dc9dc76ad8f66321c6852b6648f3f","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-apiserver-kubernetes-upgrade-335938","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"ae1dc9dc76ad8f66321c6852b6648f3f"},"owner":"root"},{"ociVersion
":"1.0.2-dev","id":"74b26acdc623c943ac1a21bfd9474dc6b38509bd7bd5998706a787dba2965663","pid":1054,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/74b26acdc623c943ac1a21bfd9474dc6b38509bd7bd5998706a787dba2965663","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/74b26acdc623c943ac1a21bfd9474dc6b38509bd7bd5998706a787dba2965663/rootfs","created":"2024-02-29T02:06:44.739453553Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-apiserver:v1.29.0-rc.2","io.kubernetes.cri.sandbox-id":"5a0ead32e04f139a2459936a27f000f2d3c845b1f96c942d6df1aee10b5e656b","io.kubernetes.cri.sandbox-name":"kube-apiserver-kubernetes-upgrade-335938","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"ae1dc9dc76ad8f66321c6852b6648f3f"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"82de0e80964a0a1123b1ea02dc992b5c8286e43c3ac417a6aadc552ff8c209fc","
pid":933,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/82de0e80964a0a1123b1ea02dc992b5c8286e43c3ac417a6aadc552ff8c209fc","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/82de0e80964a0a1123b1ea02dc992b5c8286e43c3ac417a6aadc552ff8c209fc/rootfs","created":"2024-02-29T02:06:44.386614995Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"82de0e80964a0a1123b1ea02dc992b5c8286e43c3ac417a6aadc552ff8c209fc","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-kubernetes-upgrade-335938_0db204663f6cbd136bf1dea4a2b53af2","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-scheduler-kubernetes-upgrade-335938","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"0db204663f6cbd136bf1dea4a2b53af2"},"owner":"root"},
{"ociVersion":"1.0.2-dev","id":"90448f2a0e6b9b45faa862f388769edbeee26fc729f76ebb9ac2195f91b6cf9e","pid":1083,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/90448f2a0e6b9b45faa862f388769edbeee26fc729f76ebb9ac2195f91b6cf9e","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/90448f2a0e6b9b45faa862f388769edbeee26fc729f76ebb9ac2195f91b6cf9e/rootfs","created":"2024-02-29T02:06:44.935990609Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-scheduler:v1.29.0-rc.2","io.kubernetes.cri.sandbox-id":"82de0e80964a0a1123b1ea02dc992b5c8286e43c3ac417a6aadc552ff8c209fc","io.kubernetes.cri.sandbox-name":"kube-scheduler-kubernetes-upgrade-335938","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"0db204663f6cbd136bf1dea4a2b53af2"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"9e35dcf7e1509005c82f7687f8b5befd7ee1e18341e7ec732f8ba6e
17c76c9de","pid":1222,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9e35dcf7e1509005c82f7687f8b5befd7ee1e18341e7ec732f8ba6e17c76c9de","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9e35dcf7e1509005c82f7687f8b5befd7ee1e18341e7ec732f8ba6e17c76c9de/rootfs","created":"2024-02-29T02:07:04.071295693Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"2","io.kubernetes.cri.sandbox-id":"9e35dcf7e1509005c82f7687f8b5befd7ee1e18341e7ec732f8ba6e17c76c9de","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_storage-provisioner_a6006131-44fc-4185-884a-ce0d353924e0","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"storage-provisioner","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"a6006131-44fc-4185-884a-ce0d353924e0"},"owner":"root"},{"ociVersion":"1.0.2-de
v","id":"9f04a84968eb41d156ae5f2cd5a90e281b0e1342064f2c69b8fb1963fcecb6c1","pid":877,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9f04a84968eb41d156ae5f2cd5a90e281b0e1342064f2c69b8fb1963fcecb6c1","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9f04a84968eb41d156ae5f2cd5a90e281b0e1342064f2c69b8fb1963fcecb6c1/rootfs","created":"2024-02-29T02:06:44.24190899Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"9f04a84968eb41d156ae5f2cd5a90e281b0e1342064f2c69b8fb1963fcecb6c1","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-kubernetes-upgrade-335938_8867b2a1c8f23e8013b14e7c9eb8abf2","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"etcd-kubernetes-upgrade-335938","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-ui
d":"8867b2a1c8f23e8013b14e7c9eb8abf2"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"9fafe8794413f107758942a1854756857bd6f0171f9e2b0e9fc0747650377c33","pid":994,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9fafe8794413f107758942a1854756857bd6f0171f9e2b0e9fc0747650377c33","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9fafe8794413f107758942a1854756857bd6f0171f9e2b0e9fc0747650377c33/rootfs","created":"2024-02-29T02:06:44.534793451Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/etcd:3.5.10-0","io.kubernetes.cri.sandbox-id":"9f04a84968eb41d156ae5f2cd5a90e281b0e1342064f2c69b8fb1963fcecb6c1","io.kubernetes.cri.sandbox-name":"etcd-kubernetes-upgrade-335938","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"8867b2a1c8f23e8013b14e7c9eb8abf2"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"a0123892ecea132a6137daf45e3b7e77395
7dd6395ef0dac9d9787c0380cc924","pid":1330,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a0123892ecea132a6137daf45e3b7e773957dd6395ef0dac9d9787c0380cc924","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a0123892ecea132a6137daf45e3b7e773957dd6395ef0dac9d9787c0380cc924/rootfs","created":"2024-02-29T02:07:04.376938392Z","annotations":{"io.kubernetes.cri.container-name":"kube-proxy","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-proxy:v1.29.0-rc.2","io.kubernetes.cri.sandbox-id":"bae80bb9f8bf0c303820ae8843bff87b58141f1a6ca7b99f5ed06c6f46643fa9","io.kubernetes.cri.sandbox-name":"kube-proxy-6862w","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"12cfe8f8-17f3-4d52-a68b-e50cbfc16f74"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"bae80bb9f8bf0c303820ae8843bff87b58141f1a6ca7b99f5ed06c6f46643fa9","pid":1264,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.tas
k/k8s.io/bae80bb9f8bf0c303820ae8843bff87b58141f1a6ca7b99f5ed06c6f46643fa9","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/bae80bb9f8bf0c303820ae8843bff87b58141f1a6ca7b99f5ed06c6f46643fa9/rootfs","created":"2024-02-29T02:07:04.205294017Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"2","io.kubernetes.cri.sandbox-id":"bae80bb9f8bf0c303820ae8843bff87b58141f1a6ca7b99f5ed06c6f46643fa9","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-proxy-6862w_12cfe8f8-17f3-4d52-a68b-e50cbfc16f74","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-proxy-6862w","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"12cfe8f8-17f3-4d52-a68b-e50cbfc16f74"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"d17ab8747372b6e9a63b8e96bae2f92f462ce1da05a88416bf69cd70b67d80ae","pid":935,"status":"runnin
g","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d17ab8747372b6e9a63b8e96bae2f92f462ce1da05a88416bf69cd70b67d80ae","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d17ab8747372b6e9a63b8e96bae2f92f462ce1da05a88416bf69cd70b67d80ae/rootfs","created":"2024-02-29T02:06:44.335513237Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"204","io.kubernetes.cri.sandbox-id":"d17ab8747372b6e9a63b8e96bae2f92f462ce1da05a88416bf69cd70b67d80ae","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-kubernetes-upgrade-335938_9e8f376d2a31eb70ad6df6920e5490f3","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-controller-manager-kubernetes-upgrade-335938","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"9e8f376d2a31eb70ad6df6920e5490f3"},"owner":"root"},{"ociVe
rsion":"1.0.2-dev","id":"e40a138046e5ab5aa2e5eae50edc05f0ed0f0098705b041dc3a45710bccbecd7","pid":1497,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e40a138046e5ab5aa2e5eae50edc05f0ed0f0098705b041dc3a45710bccbecd7","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e40a138046e5ab5aa2e5eae50edc05f0ed0f0098705b041dc3a45710bccbecd7/rootfs","created":"2024-02-29T02:07:05.268156998Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"e40a138046e5ab5aa2e5eae50edc05f0ed0f0098705b041dc3a45710bccbecd7","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_coredns-76f75df574-ck24q_b9c803d9-4ebf-4bf6-b7ea-d4187df2e1e3","io.kubernetes.cri.sandbox-memory":"178257920","io.kubernetes.cri.sandbox-name":"coredns-76f75df574-ck24q","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kuberne
tes.cri.sandbox-uid":"b9c803d9-4ebf-4bf6-b7ea-d4187df2e1e3"},"owner":"root"}]
	I0229 02:07:34.825403  350824 cri.go:126] list returned 13 containers
	I0229 02:07:34.825425  350824 cri.go:129] container: {ID:1e6163c9790a827bb2ae2062c3d63338adc05380d53777496c8b4b4d3e42019c Status:running}
	I0229 02:07:34.825446  350824 cri.go:135] skipping {1e6163c9790a827bb2ae2062c3d63338adc05380d53777496c8b4b4d3e42019c running}: state = "running", want "paused"
	I0229 02:07:34.825457  350824 cri.go:129] container: {ID:411140276b679b2e9741606990dee6566b35d160ca11d293df22e8b4b1d27a33 Status:running}
	I0229 02:07:34.825467  350824 cri.go:135] skipping {411140276b679b2e9741606990dee6566b35d160ca11d293df22e8b4b1d27a33 running}: state = "running", want "paused"
	I0229 02:07:34.825472  350824 cri.go:129] container: {ID:5a0ead32e04f139a2459936a27f000f2d3c845b1f96c942d6df1aee10b5e656b Status:running}
	I0229 02:07:34.825481  350824 cri.go:131] skipping 5a0ead32e04f139a2459936a27f000f2d3c845b1f96c942d6df1aee10b5e656b - not in ps
	I0229 02:07:34.825491  350824 cri.go:129] container: {ID:74b26acdc623c943ac1a21bfd9474dc6b38509bd7bd5998706a787dba2965663 Status:running}
	I0229 02:07:34.825498  350824 cri.go:135] skipping {74b26acdc623c943ac1a21bfd9474dc6b38509bd7bd5998706a787dba2965663 running}: state = "running", want "paused"
	I0229 02:07:34.825506  350824 cri.go:129] container: {ID:82de0e80964a0a1123b1ea02dc992b5c8286e43c3ac417a6aadc552ff8c209fc Status:running}
	I0229 02:07:34.825514  350824 cri.go:131] skipping 82de0e80964a0a1123b1ea02dc992b5c8286e43c3ac417a6aadc552ff8c209fc - not in ps
	I0229 02:07:34.825522  350824 cri.go:129] container: {ID:90448f2a0e6b9b45faa862f388769edbeee26fc729f76ebb9ac2195f91b6cf9e Status:running}
	I0229 02:07:34.825531  350824 cri.go:135] skipping {90448f2a0e6b9b45faa862f388769edbeee26fc729f76ebb9ac2195f91b6cf9e running}: state = "running", want "paused"
	I0229 02:07:34.825540  350824 cri.go:129] container: {ID:9e35dcf7e1509005c82f7687f8b5befd7ee1e18341e7ec732f8ba6e17c76c9de Status:running}
	I0229 02:07:34.825548  350824 cri.go:131] skipping 9e35dcf7e1509005c82f7687f8b5befd7ee1e18341e7ec732f8ba6e17c76c9de - not in ps
	I0229 02:07:34.825560  350824 cri.go:129] container: {ID:9f04a84968eb41d156ae5f2cd5a90e281b0e1342064f2c69b8fb1963fcecb6c1 Status:running}
	I0229 02:07:34.825566  350824 cri.go:131] skipping 9f04a84968eb41d156ae5f2cd5a90e281b0e1342064f2c69b8fb1963fcecb6c1 - not in ps
	I0229 02:07:34.825575  350824 cri.go:129] container: {ID:9fafe8794413f107758942a1854756857bd6f0171f9e2b0e9fc0747650377c33 Status:running}
	I0229 02:07:34.825584  350824 cri.go:135] skipping {9fafe8794413f107758942a1854756857bd6f0171f9e2b0e9fc0747650377c33 running}: state = "running", want "paused"
	I0229 02:07:34.825603  350824 cri.go:129] container: {ID:a0123892ecea132a6137daf45e3b7e773957dd6395ef0dac9d9787c0380cc924 Status:running}
	I0229 02:07:34.825614  350824 cri.go:135] skipping {a0123892ecea132a6137daf45e3b7e773957dd6395ef0dac9d9787c0380cc924 running}: state = "running", want "paused"
	I0229 02:07:34.825623  350824 cri.go:129] container: {ID:bae80bb9f8bf0c303820ae8843bff87b58141f1a6ca7b99f5ed06c6f46643fa9 Status:running}
	I0229 02:07:34.825640  350824 cri.go:131] skipping bae80bb9f8bf0c303820ae8843bff87b58141f1a6ca7b99f5ed06c6f46643fa9 - not in ps
	I0229 02:07:34.825648  350824 cri.go:129] container: {ID:d17ab8747372b6e9a63b8e96bae2f92f462ce1da05a88416bf69cd70b67d80ae Status:running}
	I0229 02:07:34.825656  350824 cri.go:131] skipping d17ab8747372b6e9a63b8e96bae2f92f462ce1da05a88416bf69cd70b67d80ae - not in ps
	I0229 02:07:34.825664  350824 cri.go:129] container: {ID:e40a138046e5ab5aa2e5eae50edc05f0ed0f0098705b041dc3a45710bccbecd7 Status:running}
	I0229 02:07:34.825669  350824 cri.go:131] skipping e40a138046e5ab5aa2e5eae50edc05f0ed0f0098705b041dc3a45710bccbecd7 - not in ps
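
==> editor's note: the cri.go filter above (illustrative Go sketch) <==
The cri.go:129/131/135 lines record minikube's pause filter: every container whose state is not the wanted "paused" is skipped, as is every ID that never appeared in the runtime's ps listing. A minimal sketch of that rule, with hypothetical names (container, filterByState, inPS) rather than minikube's actual identifiers:

package main

import "fmt"

// container mirrors the {ID Status} pairs printed by the cri.go:129 lines.
type container struct {
	ID     string
	Status string
}

// filterByState reproduces the two "skipping" branches in the log above:
// IDs missing from the ps listing, and containers in the wrong state.
func filterByState(all []container, inPS map[string]bool, want string) []container {
	var kept []container
	for _, c := range all {
		if !inPS[c.ID] {
			fmt.Printf("skipping %s - not in ps\n", c.ID)
			continue
		}
		if c.Status != want {
			fmt.Printf("skipping {%s %s}: state = %q, want %q\n", c.ID, c.Status, c.Status, want)
			continue
		}
		kept = append(kept, c)
	}
	return kept
}

func main() {
	all := []container{{ID: "1e6163c9790a8", Status: "running"}}
	inPS := map[string]bool{"1e6163c9790a8": true}
	// With want "paused", every running container is skipped and the result
	// is empty -- which is why nothing gets unpaused in the run above.
	fmt.Println(filterByState(all, inPS, "paused"))
}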
	I0229 02:07:34.825742  350824 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0229 02:07:34.842253  350824 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0229 02:07:34.842287  350824 kubeadm.go:636] restartCluster start
	I0229 02:07:34.842351  350824 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0229 02:07:34.857893  350824 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:07:34.858636  350824 kubeconfig.go:92] found "kubernetes-upgrade-335938" server: "https://192.168.50.62:8443"
	I0229 02:07:34.859591  350824 kapi.go:59] client config for kubernetes-upgrade-335938: &rest.Config{Host:"https://192.168.50.62:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18063-309085/.minikube/profiles/kubernetes-upgrade-335938/client.crt", KeyFile:"/home/jenkins/minikube-integration/18063-309085/.minikube/profiles/kubernetes-upgrade-335938/client.key", CAFile:"/home/jenkins/minikube-integration/18063-309085/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c5d0e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
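
==> editor's note: building the client config above (illustrative Go sketch) <==
The &rest.Config dump shows the profile's client certificate, key, and CA feeding a client-go config. A stripped-down equivalent that builds the same config and lists kube-system pods; host and paths are copied from the log, and error handling is simplified for brevity:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg := &rest.Config{
		Host: "https://192.168.50.62:8443",
		TLSClientConfig: rest.TLSClientConfig{
			CertFile: "/home/jenkins/minikube-integration/18063-309085/.minikube/profiles/kubernetes-upgrade-335938/client.crt",
			KeyFile:  "/home/jenkins/minikube-integration/18063-309085/.minikube/profiles/kubernetes-upgrade-335938/client.key",
			CAFile:   "/home/jenkins/minikube-integration/18063-309085/.minikube/ca.crt",
		},
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := clientset.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%d kube-system pods found\n", len(pods.Items)) // cf. the system_pods.go:86 line below
}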
	I0229 02:07:34.860245  350824 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0229 02:07:34.876042  350824 api_server.go:166] Checking apiserver status ...
	I0229 02:07:34.876122  350824 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:07:34.898298  350824 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1054/cgroup
	W0229 02:07:34.913703  350824 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1054/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:07:34.913793  350824 ssh_runner.go:195] Run: ls
	I0229 02:07:34.921040  350824 api_server.go:253] Checking apiserver healthz at https://192.168.50.62:8443/healthz ...
	I0229 02:07:34.927502  350824 api_server.go:279] https://192.168.50.62:8443/healthz returned 200:
	ok
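
==> editor's note: the healthz probe above (illustrative Go sketch) <==
The restart shortcut taken a few lines below hinges on this check: /healthz answering 200 "ok" over the profile's CA is what lets minikube skip a full kubeadm re-init. Roughly, such a probe looks like this; probeHealthz is a hypothetical helper, not minikube's api_server.go:

package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"io"
	"net/http"
	"os"
	"time"
)

// probeHealthz returns nil when GET host/healthz answers HTTP 200.
func probeHealthz(host, caPath string) error {
	ca, err := os.ReadFile(caPath)
	if err != nil {
		return err
	}
	pool := x509.NewCertPool()
	pool.AppendCertsFromPEM(ca)
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{RootCAs: pool}},
	}
	resp, err := client.Get(host + "/healthz")
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body) // the body is typically just "ok"
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
	}
	return nil
}

func main() {
	err := probeHealthz("https://192.168.50.62:8443",
		"/home/jenkins/minikube-integration/18063-309085/.minikube/ca.crt")
	fmt.Println("healthz:", err)
}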
	I0229 02:07:34.942702  350824 system_pods.go:86] 7 kube-system pods found
	I0229 02:07:34.942743  350824 system_pods.go:89] "coredns-76f75df574-ck24q" [b9c803d9-4ebf-4bf6-b7ea-d4187df2e1e3] Running
	I0229 02:07:34.942752  350824 system_pods.go:89] "etcd-kubernetes-upgrade-335938" [d2489697-350d-4332-a7c7-cf5dec7cd145] Running
	I0229 02:07:34.942760  350824 system_pods.go:89] "kube-apiserver-kubernetes-upgrade-335938" [9b14bac8-32b8-4101-b100-d5eaac2140b0] Running
	I0229 02:07:34.942773  350824 system_pods.go:89] "kube-controller-manager-kubernetes-upgrade-335938" [13fe97cd-13dd-4429-a397-6ad3380c9673] Running
	I0229 02:07:34.942780  350824 system_pods.go:89] "kube-proxy-6862w" [12cfe8f8-17f3-4d52-a68b-e50cbfc16f74] Running
	I0229 02:07:34.942787  350824 system_pods.go:89] "kube-scheduler-kubernetes-upgrade-335938" [cf7861d4-def2-4b94-9d50-7129f74d651b] Running
	I0229 02:07:34.942798  350824 system_pods.go:89] "storage-provisioner" [a6006131-44fc-4185-884a-ce0d353924e0] Running
	I0229 02:07:34.944143  350824 api_server.go:141] control plane version: v1.29.0-rc.2
	I0229 02:07:34.944165  350824 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.62
	I0229 02:07:34.944172  350824 kubeadm.go:684] Taking a shortcut, as the cluster seems to be properly configured
	I0229 02:07:34.944180  350824 kubeadm.go:640] restartCluster took 101.885007ms
	I0229 02:07:34.944187  350824 kubeadm.go:406] StartCluster complete in 232.756292ms
	I0229 02:07:34.944206  350824 settings.go:142] acquiring lock: {Name:mkf6d985c87ae1ba2300543c86d438bf48134dc6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:07:34.944283  350824 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18063-309085/kubeconfig
	I0229 02:07:34.945202  350824 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18063-309085/kubeconfig: {Name:mkd85f0f36cbc770f723a754929a01738907b7bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:07:34.945438  350824 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0229 02:07:34.945512  350824 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
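
==> editor's note: the toEnable map above (illustrative Go sketch) <==
Only the keys set true in the toEnable map (default-storageclass, storage-provisioner) are installed; everything else is ignored, matching the "enabled=[default-storageclass storage-provisioner]" summary further down. The selection is just a map filter; enabledAddons is a hypothetical name:

package main

import (
	"fmt"
	"sort"
)

// enabledAddons keeps the addon names whose flag is true, sorted for
// stable output like the log's enabled=[...] list.
func enabledAddons(toEnable map[string]bool) []string {
	var out []string
	for name, on := range toEnable {
		if on {
			out = append(out, name)
		}
	}
	sort.Strings(out)
	return out
}

func main() {
	m := map[string]bool{"default-storageclass": true, "storage-provisioner": true, "ingress": false}
	fmt.Println(enabledAddons(m)) // [default-storageclass storage-provisioner]
}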
	I0229 02:07:34.945604  350824 addons.go:69] Setting storage-provisioner=true in profile "kubernetes-upgrade-335938"
	I0229 02:07:34.945624  350824 addons.go:234] Setting addon storage-provisioner=true in "kubernetes-upgrade-335938"
	I0229 02:07:34.945624  350824 addons.go:69] Setting default-storageclass=true in profile "kubernetes-upgrade-335938"
	W0229 02:07:34.945632  350824 addons.go:243] addon storage-provisioner should already be in state true
	I0229 02:07:34.945644  350824 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kubernetes-upgrade-335938"
	I0229 02:07:34.945675  350824 host.go:66] Checking if "kubernetes-upgrade-335938" exists ...
	I0229 02:07:34.945687  350824 config.go:182] Loaded profile config "kubernetes-upgrade-335938": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.29.0-rc.2
	I0229 02:07:34.946122  350824 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0229 02:07:34.946122  350824 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0229 02:07:34.946156  350824 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:07:34.946332  350824 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:07:34.946327  350824 kapi.go:59] client config for kubernetes-upgrade-335938: &rest.Config{Host:"https://192.168.50.62:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18063-309085/.minikube/profiles/kubernetes-upgrade-335938/client.crt", KeyFile:"/home/jenkins/minikube-integration/18063-309085/.minikube/profiles/kubernetes-upgrade-335938/client.key", CAFile:"/home/jenkins/minikube-integration/18063-309085/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c5d0e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0229 02:07:34.952219  350824 kapi.go:248] "coredns" deployment in "kube-system" namespace and "kubernetes-upgrade-335938" context rescaled to 1 replicas
	I0229 02:07:34.952288  350824 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.62 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0229 02:07:34.955250  350824 out.go:177] * Verifying Kubernetes components...
	I0229 02:07:34.956685  350824 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 02:07:34.967480  350824 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43769
	I0229 02:07:34.967668  350824 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38291
	I0229 02:07:34.968075  350824 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:07:34.968121  350824 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:07:34.968686  350824 main.go:141] libmachine: Using API Version  1
	I0229 02:07:34.968705  350824 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:07:34.968833  350824 main.go:141] libmachine: Using API Version  1
	I0229 02:07:34.968861  350824 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:07:34.969236  350824 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:07:34.969291  350824 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:07:34.969509  350824 main.go:141] libmachine: (kubernetes-upgrade-335938) Calling .GetState
	I0229 02:07:34.969879  350824 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0229 02:07:34.969908  350824 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:07:34.972527  350824 kapi.go:59] client config for kubernetes-upgrade-335938: &rest.Config{Host:"https://192.168.50.62:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18063-309085/.minikube/profiles/kubernetes-upgrade-335938/client.crt", KeyFile:"/home/jenkins/minikube-integration/18063-309085/.minikube/profiles/kubernetes-upgrade-335938/client.key", CAFile:"/home/jenkins/minikube-integration/18063-309085/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c5d0e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0229 02:07:34.972771  350824 addons.go:234] Setting addon default-storageclass=true in "kubernetes-upgrade-335938"
	W0229 02:07:34.972779  350824 addons.go:243] addon default-storageclass should already be in state true
	I0229 02:07:34.972800  350824 host.go:66] Checking if "kubernetes-upgrade-335938" exists ...
	I0229 02:07:34.973057  350824 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0229 02:07:34.973082  350824 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:07:34.988641  350824 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44739
	I0229 02:07:34.989209  350824 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:07:34.989799  350824 main.go:141] libmachine: Using API Version  1
	I0229 02:07:34.989815  350824 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:07:34.990276  350824 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:07:34.990763  350824 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0229 02:07:34.990793  350824 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:07:34.991210  350824 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37605
	I0229 02:07:34.991654  350824 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:07:34.992267  350824 main.go:141] libmachine: Using API Version  1
	I0229 02:07:34.992290  350824 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:07:34.992623  350824 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:07:34.992752  350824 main.go:141] libmachine: (kubernetes-upgrade-335938) Calling .GetState
	I0229 02:07:34.998263  350824 main.go:141] libmachine: (kubernetes-upgrade-335938) Calling .DriverName
	I0229 02:07:35.000861  350824 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 02:07:35.002463  350824 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0229 02:07:35.002487  350824 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0229 02:07:35.002509  350824 main.go:141] libmachine: (kubernetes-upgrade-335938) Calling .GetSSHHostname
	I0229 02:07:35.006402  350824 main.go:141] libmachine: (kubernetes-upgrade-335938) DBG | domain kubernetes-upgrade-335938 has defined MAC address 52:54:00:e1:ea:4e in network mk-kubernetes-upgrade-335938
	I0229 02:07:35.007088  350824 main.go:141] libmachine: (kubernetes-upgrade-335938) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:ea:4e", ip: ""} in network mk-kubernetes-upgrade-335938: {Iface:virbr2 ExpiryTime:2024-02-29 03:01:43 +0000 UTC Type:0 Mac:52:54:00:e1:ea:4e Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:kubernetes-upgrade-335938 Clientid:01:52:54:00:e1:ea:4e}
	I0229 02:07:35.007114  350824 main.go:141] libmachine: (kubernetes-upgrade-335938) DBG | domain kubernetes-upgrade-335938 has defined IP address 192.168.50.62 and MAC address 52:54:00:e1:ea:4e in network mk-kubernetes-upgrade-335938
	I0229 02:07:35.007319  350824 main.go:141] libmachine: (kubernetes-upgrade-335938) Calling .GetSSHPort
	I0229 02:07:35.007519  350824 main.go:141] libmachine: (kubernetes-upgrade-335938) Calling .GetSSHKeyPath
	I0229 02:07:35.007654  350824 main.go:141] libmachine: (kubernetes-upgrade-335938) Calling .GetSSHUsername
	I0229 02:07:35.007784  350824 sshutil.go:53] new ssh client: &{IP:192.168.50.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-309085/.minikube/machines/kubernetes-upgrade-335938/id_rsa Username:docker}
	I0229 02:07:35.008256  350824 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33621
	I0229 02:07:35.008631  350824 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:07:35.009215  350824 main.go:141] libmachine: Using API Version  1
	I0229 02:07:35.009234  350824 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:07:35.010527  350824 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:07:35.010741  350824 main.go:141] libmachine: (kubernetes-upgrade-335938) Calling .GetState
	I0229 02:07:35.012275  350824 main.go:141] libmachine: (kubernetes-upgrade-335938) Calling .DriverName
	I0229 02:07:35.012515  350824 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0229 02:07:35.012530  350824 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0229 02:07:35.012546  350824 main.go:141] libmachine: (kubernetes-upgrade-335938) Calling .GetSSHHostname
	I0229 02:07:35.015647  350824 main.go:141] libmachine: (kubernetes-upgrade-335938) DBG | domain kubernetes-upgrade-335938 has defined MAC address 52:54:00:e1:ea:4e in network mk-kubernetes-upgrade-335938
	I0229 02:07:35.016109  350824 main.go:141] libmachine: (kubernetes-upgrade-335938) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:ea:4e", ip: ""} in network mk-kubernetes-upgrade-335938: {Iface:virbr2 ExpiryTime:2024-02-29 03:01:43 +0000 UTC Type:0 Mac:52:54:00:e1:ea:4e Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:kubernetes-upgrade-335938 Clientid:01:52:54:00:e1:ea:4e}
	I0229 02:07:35.016134  350824 main.go:141] libmachine: (kubernetes-upgrade-335938) DBG | domain kubernetes-upgrade-335938 has defined IP address 192.168.50.62 and MAC address 52:54:00:e1:ea:4e in network mk-kubernetes-upgrade-335938
	I0229 02:07:35.016440  350824 main.go:141] libmachine: (kubernetes-upgrade-335938) Calling .GetSSHPort
	I0229 02:07:35.016584  350824 main.go:141] libmachine: (kubernetes-upgrade-335938) Calling .GetSSHKeyPath
	I0229 02:07:35.016706  350824 main.go:141] libmachine: (kubernetes-upgrade-335938) Calling .GetSSHUsername
	I0229 02:07:35.016827  350824 sshutil.go:53] new ssh client: &{IP:192.168.50.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-309085/.minikube/machines/kubernetes-upgrade-335938/id_rsa Username:docker}
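
==> editor's note: "scp memory --> file" above (illustrative Go sketch) <==
The ssh_runner.go:362 lines stream a manifest held in memory straight onto the guest over SSH before kubectl apply runs against it. A rough equivalent using golang.org/x/crypto/ssh; writeRemote is a hypothetical helper, not minikube's actual ssh_runner, and the connection details are taken from the sshutil.go:53 line above:

package main

import (
	"bytes"
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

// writeRemote pipes data into "sudo tee <path>" on the remote host, which
// is the observable effect of the "scp memory --> path (N bytes)" lines.
func writeRemote(addr, user, keyPath, path string, data []byte) error {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return err
	}
	client, err := ssh.Dial("tcp", addr, &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only for throwaway test VMs
	})
	if err != nil {
		return err
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		return err
	}
	defer session.Close()
	session.Stdin = bytes.NewReader(data)
	return session.Run(fmt.Sprintf("sudo tee %s >/dev/null", path))
}

func main() {
	manifest := []byte("# storage-provisioner manifest bytes would go here\n")
	err := writeRemote("192.168.50.62:22", "docker",
		"/home/jenkins/minikube-integration/18063-309085/.minikube/machines/kubernetes-upgrade-335938/id_rsa",
		"/etc/kubernetes/addons/storage-provisioner.yaml", manifest)
	fmt.Println("write:", err)
}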
	I0229 02:07:31.241767  352533 main.go:141] libmachine: (bridge-704272) DBG | I0229 02:07:31.241628  352575 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18063-309085/.minikube/machines/bridge-704272/bridge-704272.rawdisk...
	I0229 02:07:31.241806  352533 main.go:141] libmachine: (bridge-704272) DBG | Writing magic tar header
	I0229 02:07:31.241830  352533 main.go:141] libmachine: (bridge-704272) DBG | Writing SSH key tar header
	I0229 02:07:31.241847  352533 main.go:141] libmachine: (bridge-704272) DBG | I0229 02:07:31.241755  352575 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18063-309085/.minikube/machines/bridge-704272 ...
	I0229 02:07:31.242000  352533 main.go:141] libmachine: (bridge-704272) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18063-309085/.minikube/machines/bridge-704272
	I0229 02:07:31.242055  352533 main.go:141] libmachine: (bridge-704272) Setting executable bit set on /home/jenkins/minikube-integration/18063-309085/.minikube/machines/bridge-704272 (perms=drwx------)
	I0229 02:07:31.242090  352533 main.go:141] libmachine: (bridge-704272) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18063-309085/.minikube/machines
	I0229 02:07:31.242107  352533 main.go:141] libmachine: (bridge-704272) Setting executable bit set on /home/jenkins/minikube-integration/18063-309085/.minikube/machines (perms=drwxr-xr-x)
	I0229 02:07:31.242119  352533 main.go:141] libmachine: (bridge-704272) Setting executable bit set on /home/jenkins/minikube-integration/18063-309085/.minikube (perms=drwxr-xr-x)
	I0229 02:07:31.242133  352533 main.go:141] libmachine: (bridge-704272) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18063-309085/.minikube
	I0229 02:07:31.242148  352533 main.go:141] libmachine: (bridge-704272) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18063-309085
	I0229 02:07:31.242158  352533 main.go:141] libmachine: (bridge-704272) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0229 02:07:31.242169  352533 main.go:141] libmachine: (bridge-704272) DBG | Checking permissions on dir: /home/jenkins
	I0229 02:07:31.242177  352533 main.go:141] libmachine: (bridge-704272) DBG | Checking permissions on dir: /home
	I0229 02:07:31.242193  352533 main.go:141] libmachine: (bridge-704272) DBG | Skipping /home - not owner
	I0229 02:07:31.242218  352533 main.go:141] libmachine: (bridge-704272) Setting executable bit set on /home/jenkins/minikube-integration/18063-309085 (perms=drwxrwxr-x)
	I0229 02:07:31.242234  352533 main.go:141] libmachine: (bridge-704272) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0229 02:07:31.242245  352533 main.go:141] libmachine: (bridge-704272) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0229 02:07:31.242257  352533 main.go:141] libmachine: (bridge-704272) Creating domain...
	I0229 02:07:31.243397  352533 main.go:141] libmachine: (bridge-704272) define libvirt domain using xml: 
	I0229 02:07:31.243422  352533 main.go:141] libmachine: (bridge-704272) <domain type='kvm'>
	I0229 02:07:31.243434  352533 main.go:141] libmachine: (bridge-704272)   <name>bridge-704272</name>
	I0229 02:07:31.243446  352533 main.go:141] libmachine: (bridge-704272)   <memory unit='MiB'>3072</memory>
	I0229 02:07:31.243456  352533 main.go:141] libmachine: (bridge-704272)   <vcpu>2</vcpu>
	I0229 02:07:31.243463  352533 main.go:141] libmachine: (bridge-704272)   <features>
	I0229 02:07:31.243476  352533 main.go:141] libmachine: (bridge-704272)     <acpi/>
	I0229 02:07:31.243487  352533 main.go:141] libmachine: (bridge-704272)     <apic/>
	I0229 02:07:31.243495  352533 main.go:141] libmachine: (bridge-704272)     <pae/>
	I0229 02:07:31.243501  352533 main.go:141] libmachine: (bridge-704272)     
	I0229 02:07:31.243509  352533 main.go:141] libmachine: (bridge-704272)   </features>
	I0229 02:07:31.243522  352533 main.go:141] libmachine: (bridge-704272)   <cpu mode='host-passthrough'>
	I0229 02:07:31.243530  352533 main.go:141] libmachine: (bridge-704272)   
	I0229 02:07:31.243539  352533 main.go:141] libmachine: (bridge-704272)   </cpu>
	I0229 02:07:31.243550  352533 main.go:141] libmachine: (bridge-704272)   <os>
	I0229 02:07:31.243587  352533 main.go:141] libmachine: (bridge-704272)     <type>hvm</type>
	I0229 02:07:31.243614  352533 main.go:141] libmachine: (bridge-704272)     <boot dev='cdrom'/>
	I0229 02:07:31.243626  352533 main.go:141] libmachine: (bridge-704272)     <boot dev='hd'/>
	I0229 02:07:31.243636  352533 main.go:141] libmachine: (bridge-704272)     <bootmenu enable='no'/>
	I0229 02:07:31.243665  352533 main.go:141] libmachine: (bridge-704272)   </os>
	I0229 02:07:31.243681  352533 main.go:141] libmachine: (bridge-704272)   <devices>
	I0229 02:07:31.243693  352533 main.go:141] libmachine: (bridge-704272)     <disk type='file' device='cdrom'>
	I0229 02:07:31.243710  352533 main.go:141] libmachine: (bridge-704272)       <source file='/home/jenkins/minikube-integration/18063-309085/.minikube/machines/bridge-704272/boot2docker.iso'/>
	I0229 02:07:31.243720  352533 main.go:141] libmachine: (bridge-704272)       <target dev='hdc' bus='scsi'/>
	I0229 02:07:31.243727  352533 main.go:141] libmachine: (bridge-704272)       <readonly/>
	I0229 02:07:31.243735  352533 main.go:141] libmachine: (bridge-704272)     </disk>
	I0229 02:07:31.243743  352533 main.go:141] libmachine: (bridge-704272)     <disk type='file' device='disk'>
	I0229 02:07:31.243753  352533 main.go:141] libmachine: (bridge-704272)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0229 02:07:31.243764  352533 main.go:141] libmachine: (bridge-704272)       <source file='/home/jenkins/minikube-integration/18063-309085/.minikube/machines/bridge-704272/bridge-704272.rawdisk'/>
	I0229 02:07:31.243773  352533 main.go:141] libmachine: (bridge-704272)       <target dev='hda' bus='virtio'/>
	I0229 02:07:31.243779  352533 main.go:141] libmachine: (bridge-704272)     </disk>
	I0229 02:07:31.243788  352533 main.go:141] libmachine: (bridge-704272)     <interface type='network'>
	I0229 02:07:31.243795  352533 main.go:141] libmachine: (bridge-704272)       <source network='mk-bridge-704272'/>
	I0229 02:07:31.243806  352533 main.go:141] libmachine: (bridge-704272)       <model type='virtio'/>
	I0229 02:07:31.243813  352533 main.go:141] libmachine: (bridge-704272)     </interface>
	I0229 02:07:31.243821  352533 main.go:141] libmachine: (bridge-704272)     <interface type='network'>
	I0229 02:07:31.243828  352533 main.go:141] libmachine: (bridge-704272)       <source network='default'/>
	I0229 02:07:31.243836  352533 main.go:141] libmachine: (bridge-704272)       <model type='virtio'/>
	I0229 02:07:31.243843  352533 main.go:141] libmachine: (bridge-704272)     </interface>
	I0229 02:07:31.243851  352533 main.go:141] libmachine: (bridge-704272)     <serial type='pty'>
	I0229 02:07:31.243857  352533 main.go:141] libmachine: (bridge-704272)       <target port='0'/>
	I0229 02:07:31.243864  352533 main.go:141] libmachine: (bridge-704272)     </serial>
	I0229 02:07:31.243871  352533 main.go:141] libmachine: (bridge-704272)     <console type='pty'>
	I0229 02:07:31.243879  352533 main.go:141] libmachine: (bridge-704272)       <target type='serial' port='0'/>
	I0229 02:07:31.243885  352533 main.go:141] libmachine: (bridge-704272)     </console>
	I0229 02:07:31.243894  352533 main.go:141] libmachine: (bridge-704272)     <rng model='virtio'>
	I0229 02:07:31.243903  352533 main.go:141] libmachine: (bridge-704272)       <backend model='random'>/dev/random</backend>
	I0229 02:07:31.243910  352533 main.go:141] libmachine: (bridge-704272)     </rng>
	I0229 02:07:31.243916  352533 main.go:141] libmachine: (bridge-704272)     
	I0229 02:07:31.243924  352533 main.go:141] libmachine: (bridge-704272)     
	I0229 02:07:31.243931  352533 main.go:141] libmachine: (bridge-704272)   </devices>
	I0229 02:07:31.243938  352533 main.go:141] libmachine: (bridge-704272) </domain>
	I0229 02:07:31.243944  352533 main.go:141] libmachine: (bridge-704272) 
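
==> editor's note: defining the domain above (illustrative Go sketch) <==
The <domain> XML logged line by line above is handed to libvirt to define and boot the VM. With the Go libvirt bindings (libvirt.org/go/libvirt, which need cgo and the libvirt development headers), the define-then-create flow looks roughly like this; the XML here is an abbreviated copy of the log's dump, not the full definition:

package main

import (
	"fmt"

	libvirt "libvirt.org/go/libvirt"
)

func main() {
	// Abbreviated from the <domain> dump above; disk and serial devices omitted.
	domainXML := `<domain type='kvm'>
  <name>bridge-704272</name>
  <memory unit='MiB'>3072</memory>
  <vcpu>2</vcpu>
  <os><type>hvm</type><boot dev='cdrom'/><boot dev='hd'/></os>
  <devices>
    <interface type='network'><source network='mk-bridge-704272'/><model type='virtio'/></interface>
    <interface type='network'><source network='default'/><model type='virtio'/></interface>
  </devices>
</domain>`

	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	dom, err := conn.DomainDefineXML(domainXML) // "define libvirt domain using xml"
	if err != nil {
		panic(err)
	}
	defer dom.Free()

	if err := dom.Create(); err != nil { // boots the VM: "Creating domain..."
		panic(err)
	}
	fmt.Println("domain defined and started")
}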
	I0229 02:07:31.248487  352533 main.go:141] libmachine: (bridge-704272) DBG | domain bridge-704272 has defined MAC address 52:54:00:92:49:2f in network default
	I0229 02:07:31.249164  352533 main.go:141] libmachine: (bridge-704272) Ensuring networks are active...
	I0229 02:07:31.249192  352533 main.go:141] libmachine: (bridge-704272) DBG | domain bridge-704272 has defined MAC address 52:54:00:da:1a:df in network mk-bridge-704272
	I0229 02:07:31.249974  352533 main.go:141] libmachine: (bridge-704272) Ensuring network default is active
	I0229 02:07:31.250389  352533 main.go:141] libmachine: (bridge-704272) Ensuring network mk-bridge-704272 is active
	I0229 02:07:31.251100  352533 main.go:141] libmachine: (bridge-704272) Getting domain xml...
	I0229 02:07:31.252160  352533 main.go:141] libmachine: (bridge-704272) Creating domain...
	I0229 02:07:33.045030  352533 main.go:141] libmachine: (bridge-704272) Waiting to get IP...
	I0229 02:07:33.046176  352533 main.go:141] libmachine: (bridge-704272) DBG | domain bridge-704272 has defined MAC address 52:54:00:da:1a:df in network mk-bridge-704272
	I0229 02:07:33.046846  352533 main.go:141] libmachine: (bridge-704272) DBG | unable to find current IP address of domain bridge-704272 in network mk-bridge-704272
	I0229 02:07:33.046868  352533 main.go:141] libmachine: (bridge-704272) DBG | I0229 02:07:33.046759  352575 retry.go:31] will retry after 199.280385ms: waiting for machine to come up
	I0229 02:07:33.247411  352533 main.go:141] libmachine: (bridge-704272) DBG | domain bridge-704272 has defined MAC address 52:54:00:da:1a:df in network mk-bridge-704272
	I0229 02:07:33.248260  352533 main.go:141] libmachine: (bridge-704272) DBG | unable to find current IP address of domain bridge-704272 in network mk-bridge-704272
	I0229 02:07:33.248292  352533 main.go:141] libmachine: (bridge-704272) DBG | I0229 02:07:33.248196  352575 retry.go:31] will retry after 376.215971ms: waiting for machine to come up
	I0229 02:07:33.625897  352533 main.go:141] libmachine: (bridge-704272) DBG | domain bridge-704272 has defined MAC address 52:54:00:da:1a:df in network mk-bridge-704272
	I0229 02:07:33.628562  352533 main.go:141] libmachine: (bridge-704272) DBG | unable to find current IP address of domain bridge-704272 in network mk-bridge-704272
	I0229 02:07:33.628592  352533 main.go:141] libmachine: (bridge-704272) DBG | I0229 02:07:33.628494  352575 retry.go:31] will retry after 293.197812ms: waiting for machine to come up
	I0229 02:07:33.923293  352533 main.go:141] libmachine: (bridge-704272) DBG | domain bridge-704272 has defined MAC address 52:54:00:da:1a:df in network mk-bridge-704272
	I0229 02:07:33.924112  352533 main.go:141] libmachine: (bridge-704272) DBG | unable to find current IP address of domain bridge-704272 in network mk-bridge-704272
	I0229 02:07:33.924129  352533 main.go:141] libmachine: (bridge-704272) DBG | I0229 02:07:33.924067  352575 retry.go:31] will retry after 461.04958ms: waiting for machine to come up
	I0229 02:07:34.386744  352533 main.go:141] libmachine: (bridge-704272) DBG | domain bridge-704272 has defined MAC address 52:54:00:da:1a:df in network mk-bridge-704272
	I0229 02:07:34.387404  352533 main.go:141] libmachine: (bridge-704272) DBG | unable to find current IP address of domain bridge-704272 in network mk-bridge-704272
	I0229 02:07:34.387430  352533 main.go:141] libmachine: (bridge-704272) DBG | I0229 02:07:34.387348  352575 retry.go:31] will retry after 486.675227ms: waiting for machine to come up
	I0229 02:07:34.876309  352533 main.go:141] libmachine: (bridge-704272) DBG | domain bridge-704272 has defined MAC address 52:54:00:da:1a:df in network mk-bridge-704272
	I0229 02:07:34.876965  352533 main.go:141] libmachine: (bridge-704272) DBG | unable to find current IP address of domain bridge-704272 in network mk-bridge-704272
	I0229 02:07:34.876997  352533 main.go:141] libmachine: (bridge-704272) DBG | I0229 02:07:34.876911  352575 retry.go:31] will retry after 802.499905ms: waiting for machine to come up
	I0229 02:07:35.680926  352533 main.go:141] libmachine: (bridge-704272) DBG | domain bridge-704272 has defined MAC address 52:54:00:da:1a:df in network mk-bridge-704272
	I0229 02:07:35.681474  352533 main.go:141] libmachine: (bridge-704272) DBG | unable to find current IP address of domain bridge-704272 in network mk-bridge-704272
	I0229 02:07:35.681509  352533 main.go:141] libmachine: (bridge-704272) DBG | I0229 02:07:35.681420  352575 retry.go:31] will retry after 719.014702ms: waiting for machine to come up
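
==> editor's note: the retry loop above (illustrative Go sketch) <==
The retry.go:31 lines poll the DHCP leases with short, jittered, roughly growing delays (199ms, 376ms, 293ms, ...) until the MAC picks up an address. The shape of that loop, as a sketch under assumed names (waitForIP, lookup) rather than minikube's actual retry package:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForIP polls lookup until it yields an address or maxWait elapses,
// sleeping a jittered, roughly doubling delay between attempts.
func waitForIP(lookup func() (string, error), maxWait time.Duration) (string, error) {
	deadline := time.Now().Add(maxWait)
	base := 200 * time.Millisecond
	for attempt := 1; time.Now().Before(deadline); attempt++ {
		if ip, err := lookup(); err == nil {
			return ip, nil
		}
		sleep := base/2 + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("retry %d: will retry after %v: waiting for machine to come up\n", attempt, sleep)
		time.Sleep(sleep)
		if base < 2*time.Second {
			base *= 2
		}
	}
	return "", errors.New("timed out waiting for machine IP")
}

func main() {
	calls := 0
	ip, err := waitForIP(func() (string, error) {
		calls++
		if calls < 4 {
			return "", errors.New("unable to find current IP address of domain")
		}
		return "192.168.61.10", nil // hypothetical address for the demo
	}, time.Minute)
	fmt.Println(ip, err)
}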
	I0229 02:07:35.188363  350824 api_server.go:52] waiting for apiserver process to appear ...
	I0229 02:07:35.188397  350824 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0229 02:07:35.188485  350824 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:07:35.199960  350824 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0229 02:07:35.247510  350824 api_server.go:72] duration metric: took 295.173623ms to wait for apiserver process to appear ...
	I0229 02:07:35.247545  350824 api_server.go:88] waiting for apiserver healthz status ...
	I0229 02:07:35.247571  350824 api_server.go:253] Checking apiserver healthz at https://192.168.50.62:8443/healthz ...
	I0229 02:07:35.253857  350824 api_server.go:279] https://192.168.50.62:8443/healthz returned 200:
	ok
	I0229 02:07:35.258426  350824 api_server.go:141] control plane version: v1.29.0-rc.2
	I0229 02:07:35.258456  350824 api_server.go:131] duration metric: took 10.904395ms to wait for apiserver health ...
	I0229 02:07:35.258464  350824 system_pods.go:43] waiting for kube-system pods to appear ...
	I0229 02:07:35.268333  350824 system_pods.go:59] 7 kube-system pods found
	I0229 02:07:35.268371  350824 system_pods.go:61] "coredns-76f75df574-ck24q" [b9c803d9-4ebf-4bf6-b7ea-d4187df2e1e3] Running
	I0229 02:07:35.268378  350824 system_pods.go:61] "etcd-kubernetes-upgrade-335938" [d2489697-350d-4332-a7c7-cf5dec7cd145] Running
	I0229 02:07:35.268393  350824 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-335938" [9b14bac8-32b8-4101-b100-d5eaac2140b0] Running
	I0229 02:07:35.268401  350824 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-335938" [13fe97cd-13dd-4429-a397-6ad3380c9673] Running
	I0229 02:07:35.268410  350824 system_pods.go:61] "kube-proxy-6862w" [12cfe8f8-17f3-4d52-a68b-e50cbfc16f74] Running
	I0229 02:07:35.268416  350824 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-335938" [cf7861d4-def2-4b94-9d50-7129f74d651b] Running
	I0229 02:07:35.268425  350824 system_pods.go:61] "storage-provisioner" [a6006131-44fc-4185-884a-ce0d353924e0] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0229 02:07:35.268441  350824 system_pods.go:74] duration metric: took 9.969856ms to wait for pod list to return data ...
	I0229 02:07:35.268467  350824 kubeadm.go:581] duration metric: took 316.139642ms to wait for : map[apiserver:true system_pods:true] ...
	I0229 02:07:35.268490  350824 node_conditions.go:102] verifying NodePressure condition ...
	I0229 02:07:35.268895  350824 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0229 02:07:35.272347  350824 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0229 02:07:35.272382  350824 node_conditions.go:123] node cpu capacity is 2
	I0229 02:07:35.272397  350824 node_conditions.go:105] duration metric: took 3.899639ms to run NodePressure ...
	I0229 02:07:35.272412  350824 start.go:228] waiting for startup goroutines ...
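
==> editor's note: the NodePressure check above (illustrative Go sketch) <==
The node_conditions.go lines read capacity straight off the Node object (ephemeral-storage 17734596Ki, cpu 2) and confirm the pressure conditions are clear. Reading the same fields with client-go, assuming the kubeconfig path from the log already points at this cluster:

package main

import (
	"context"
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/18063-309085/kubeconfig")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := clientset.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		storage := n.Status.Capacity[v1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[v1.ResourceCPU]
		fmt.Printf("node %s: ephemeral storage %s, cpu %s\n", n.Name, storage.String(), cpu.String())
		for _, c := range n.Status.Conditions {
			// MemoryPressure, DiskPressure and PIDPressure should all be False.
			fmt.Printf("  %s=%s\n", c.Type, c.Status)
		}
	}
}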
	I0229 02:07:35.565746  350824 main.go:141] libmachine: Making call to close driver server
	I0229 02:07:35.565778  350824 main.go:141] libmachine: (kubernetes-upgrade-335938) Calling .Close
	I0229 02:07:35.566122  350824 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:07:35.566184  350824 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:07:35.566213  350824 main.go:141] libmachine: Making call to close driver server
	I0229 02:07:35.566230  350824 main.go:141] libmachine: (kubernetes-upgrade-335938) Calling .Close
	I0229 02:07:35.566138  350824 main.go:141] libmachine: (kubernetes-upgrade-335938) DBG | Closing plugin on server side
	I0229 02:07:35.566646  350824 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:07:35.566663  350824 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:07:35.566943  350824 main.go:141] libmachine: (kubernetes-upgrade-335938) DBG | Closing plugin on server side
	I0229 02:07:35.576158  350824 main.go:141] libmachine: Making call to close driver server
	I0229 02:07:35.576191  350824 main.go:141] libmachine: (kubernetes-upgrade-335938) Calling .Close
	I0229 02:07:35.576486  350824 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:07:35.576539  350824 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:07:35.576575  350824 main.go:141] libmachine: (kubernetes-upgrade-335938) DBG | Closing plugin on server side
	I0229 02:07:36.507591  350824 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.23865732s)
	I0229 02:07:36.507645  350824 main.go:141] libmachine: Making call to close driver server
	I0229 02:07:36.507656  350824 main.go:141] libmachine: (kubernetes-upgrade-335938) Calling .Close
	I0229 02:07:36.507987  350824 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:07:36.508003  350824 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:07:36.508012  350824 main.go:141] libmachine: Making call to close driver server
	I0229 02:07:36.508018  350824 main.go:141] libmachine: (kubernetes-upgrade-335938) Calling .Close
	I0229 02:07:36.508392  350824 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:07:36.508669  350824 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:07:36.743666  350824 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0229 02:07:36.783662  350824 addons.go:505] enable addons completed in 1.83812527s: enabled=[default-storageclass storage-provisioner]
	I0229 02:07:36.783736  350824 start.go:233] waiting for cluster config update ...
	I0229 02:07:36.783754  350824 start.go:242] writing updated cluster config ...
	I0229 02:07:36.877613  350824 ssh_runner.go:195] Run: rm -f paused
	I0229 02:07:36.934215  350824 start.go:601] kubectl: 1.29.2, cluster: 1.29.0-rc.2 (minor skew: 0)
	I0229 02:07:37.022687  350824 out.go:177] * Done! kubectl is now configured to use "kubernetes-upgrade-335938" cluster and "default" namespace by default
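
==> editor's note: the minor-skew line above (illustrative Go sketch) <==
The closing version line compares the host kubectl (1.29.2) against the cluster (1.29.0-rc.2) and reports a minor skew of 0; kubectl is supported within one minor version of the apiserver. The arithmetic is just a comparison of minor numbers; minorOf is a hypothetical helper:

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// minorOf pulls the minor number out of versions like "1.29.2" or "1.29.0-rc.2".
func minorOf(v string) int {
	parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
	n, _ := strconv.Atoi(parts[1])
	return n
}

func main() {
	kubectl, cluster := "1.29.2", "1.29.0-rc.2"
	skew := minorOf(kubectl) - minorOf(cluster)
	if skew < 0 {
		skew = -skew
	}
	fmt.Printf("kubectl: %s, cluster: %s (minor skew: %d)\n", kubectl, cluster, skew)
}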
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	8291c953ce988       6e38f40d628db       3 seconds ago       Running             storage-provisioner       1                   9e35dcf7e1509       storage-provisioner
	411140276b679       cbb01a7bd410d       31 seconds ago      Running             coredns                   0                   e40a138046e5a       coredns-76f75df574-ck24q
	a0123892ecea1       cc0a4f00aad7b       33 seconds ago      Running             kube-proxy                0                   bae80bb9f8bf0       kube-proxy-6862w
	78c483812ce39       6e38f40d628db       34 seconds ago      Exited              storage-provisioner       0                   9e35dcf7e1509       storage-provisioner
	90448f2a0e6b9       4270645ed6b7a       53 seconds ago      Running             kube-scheduler            0                   82de0e80964a0       kube-scheduler-kubernetes-upgrade-335938
	1e6163c9790a8       d4e01cdf63970       53 seconds ago      Running             kube-controller-manager   0                   d17ab8747372b       kube-controller-manager-kubernetes-upgrade-335938
	74b26acdc623c       bbb47a0f83324       53 seconds ago      Running             kube-apiserver            0                   5a0ead32e04f1       kube-apiserver-kubernetes-upgrade-335938
	9fafe8794413f       a0eed15eed449       53 seconds ago      Running             etcd                      0                   9f04a84968eb4       etcd-kubernetes-upgrade-335938
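
==> editor's note: the container status table above (illustrative Go sketch) <==
A table like this comes from the CRI surface containerd serves on its socket; note storage-provisioner now shows a Running attempt 1 next to the attempt 0 that Exited. Listing containers over the same gRPC API, as a sketch using k8s.io/cri-api against the socket path from this VM's config:

package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()
	conn, err := grpc.DialContext(ctx, "unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)
	resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		panic(err)
	}
	for _, c := range resp.Containers {
		// Prints rows comparable to the CONTAINER / NAME / ATTEMPT / STATE columns above.
		fmt.Printf("%s  %s  %d  %s\n", c.Id[:13], c.Metadata.Name, c.Metadata.Attempt, c.State)
	}
}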
	
	
	==> containerd <==
	Feb 29 02:07:32 kubernetes-upgrade-335938 containerd[1701]: time="2024-02-29T02:07:32.125354824Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Feb 29 02:07:32 kubernetes-upgrade-335938 containerd[1701]: time="2024-02-29T02:07:32.125406580Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Feb 29 02:07:32 kubernetes-upgrade-335938 containerd[1701]: time="2024-02-29T02:07:32.125424029Z" level=info msg="NRI interface is disabled by configuration."
	Feb 29 02:07:32 kubernetes-upgrade-335938 containerd[1701]: time="2024-02-29T02:07:32.125445365Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
	Feb 29 02:07:32 kubernetes-upgrade-335938 containerd[1701]: time="2024-02-29T02:07:32.126096609Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:true IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath:/etc/containerd/certs.d Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress: StreamServerPort:10010 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.9 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s} ContainerdRootDir:/mnt/vda1/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/mnt/vda1/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
	Feb 29 02:07:32 kubernetes-upgrade-335938 containerd[1701]: time="2024-02-29T02:07:32.126209053Z" level=info msg="Connect containerd service"
	Feb 29 02:07:32 kubernetes-upgrade-335938 containerd[1701]: time="2024-02-29T02:07:32.126268836Z" level=info msg="using legacy CRI server"
	Feb 29 02:07:32 kubernetes-upgrade-335938 containerd[1701]: time="2024-02-29T02:07:32.126278468Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Feb 29 02:07:32 kubernetes-upgrade-335938 containerd[1701]: time="2024-02-29T02:07:32.126327034Z" level=info msg="Get image filesystem path \"/mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
	Feb 29 02:07:32 kubernetes-upgrade-335938 containerd[1701]: time="2024-02-29T02:07:32.128393246Z" level=info msg="Start subscribing containerd event"
	Feb 29 02:07:32 kubernetes-upgrade-335938 containerd[1701]: time="2024-02-29T02:07:32.128600538Z" level=info msg="Start recovering state"
	Feb 29 02:07:32 kubernetes-upgrade-335938 containerd[1701]: time="2024-02-29T02:07:32.128685855Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Feb 29 02:07:32 kubernetes-upgrade-335938 containerd[1701]: time="2024-02-29T02:07:32.128816375Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Feb 29 02:07:32 kubernetes-upgrade-335938 containerd[1701]: time="2024-02-29T02:07:32.362632742Z" level=info msg="Start event monitor"
	Feb 29 02:07:32 kubernetes-upgrade-335938 containerd[1701]: time="2024-02-29T02:07:32.362711932Z" level=info msg="Start snapshots syncer"
	Feb 29 02:07:32 kubernetes-upgrade-335938 containerd[1701]: time="2024-02-29T02:07:32.362732419Z" level=info msg="Start cni network conf syncer for default"
	Feb 29 02:07:32 kubernetes-upgrade-335938 containerd[1701]: time="2024-02-29T02:07:32.362743251Z" level=info msg="Start streaming server"
	Feb 29 02:07:32 kubernetes-upgrade-335938 containerd[1701]: time="2024-02-29T02:07:32.362778051Z" level=info msg="containerd successfully booted in 0.515693s"
	Feb 29 02:07:34 kubernetes-upgrade-335938 containerd[1701]: time="2024-02-29T02:07:34.452770433Z" level=info msg="shim disconnected" id=78c483812ce39f712a5772be326e575fb6a3d525dd9e32f122598ce511212555 namespace=k8s.io
	Feb 29 02:07:34 kubernetes-upgrade-335938 containerd[1701]: time="2024-02-29T02:07:34.453551309Z" level=warning msg="cleaning up after shim disconnected" id=78c483812ce39f712a5772be326e575fb6a3d525dd9e32f122598ce511212555 namespace=k8s.io
	Feb 29 02:07:34 kubernetes-upgrade-335938 containerd[1701]: time="2024-02-29T02:07:34.453644850Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Feb 29 02:07:35 kubernetes-upgrade-335938 containerd[1701]: time="2024-02-29T02:07:35.080077839Z" level=info msg="CreateContainer within sandbox \"9e35dcf7e1509005c82f7687f8b5befd7ee1e18341e7ec732f8ba6e17c76c9de\" for container &ContainerMetadata{Name:storage-provisioner,Attempt:1,}"
	Feb 29 02:07:35 kubernetes-upgrade-335938 containerd[1701]: time="2024-02-29T02:07:35.132690340Z" level=info msg="CreateContainer within sandbox \"9e35dcf7e1509005c82f7687f8b5befd7ee1e18341e7ec732f8ba6e17c76c9de\" for &ContainerMetadata{Name:storage-provisioner,Attempt:1,} returns container id \"8291c953ce988cae49f4c4ac00065b0b78ae7c387aea2a54fab9970209ad55ee\""
	Feb 29 02:07:35 kubernetes-upgrade-335938 containerd[1701]: time="2024-02-29T02:07:35.138133752Z" level=info msg="StartContainer for \"8291c953ce988cae49f4c4ac00065b0b78ae7c387aea2a54fab9970209ad55ee\""
	Feb 29 02:07:35 kubernetes-upgrade-335938 containerd[1701]: time="2024-02-29T02:07:35.310511327Z" level=info msg="StartContainer for \"8291c953ce988cae49f4c4ac00065b0b78ae7c387aea2a54fab9970209ad55ee\" returns successfully"
	
	
	==> coredns [411140276b679b2e9741606990dee6566b35d160ca11d293df22e8b4b1d27a33] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:53458 - 53871 "HINFO IN 953916620742989287.5808114956502259787. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.013912468s
	
	
	==> describe nodes <==
	Name:               kubernetes-upgrade-335938
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=kubernetes-upgrade-335938
	                    kubernetes.io/os=linux
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 29 Feb 2024 02:06:47 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  kubernetes-upgrade-335938
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 29 Feb 2024 02:07:38 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 29 Feb 2024 02:07:08 +0000   Thu, 29 Feb 2024 02:06:45 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 29 Feb 2024 02:07:08 +0000   Thu, 29 Feb 2024 02:06:45 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 29 Feb 2024 02:07:08 +0000   Thu, 29 Feb 2024 02:06:45 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 29 Feb 2024 02:07:08 +0000   Thu, 29 Feb 2024 02:06:49 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.62
	  Hostname:    kubernetes-upgrade-335938
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 604b6110abad46dca600134d5f6d5e28
	  System UUID:                604b6110-abad-46dc-a600-134d5f6d5e28
	  Boot ID:                    66a7a9d7-9f60-4f1a-8219-875abc5db79d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.11
	  Kubelet Version:            v1.29.0-rc.2
	  Kube-Proxy Version:         v1.29.0-rc.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-76f75df574-ck24q                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     36s
	  kube-system                 etcd-kubernetes-upgrade-335938                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         47s
	  kube-system                 kube-apiserver-kubernetes-upgrade-335938             250m (12%)    0 (0%)      0 (0%)           0 (0%)         49s
	  kube-system                 kube-controller-manager-kubernetes-upgrade-335938    200m (10%)    0 (0%)      0 (0%)           0 (0%)         42s
	  kube-system                 kube-proxy-6862w                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         36s
	  kube-system                 kube-scheduler-kubernetes-upgrade-335938             100m (5%)     0 (0%)      0 (0%)           0 (0%)         41s
	  kube-system                 storage-provisioner                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         45s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 34s                kube-proxy       
	  Normal  Starting                 56s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  56s (x8 over 56s)  kubelet          Node kubernetes-upgrade-335938 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    56s (x8 over 56s)  kubelet          Node kubernetes-upgrade-335938 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     56s (x7 over 56s)  kubelet          Node kubernetes-upgrade-335938 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  56s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           36s                node-controller  Node kubernetes-upgrade-335938 event: Registered Node kubernetes-upgrade-335938 in Controller
	
	
	==> dmesg <==
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.063475] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.047197] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.346928] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +4.246749] systemd-fstab-generator[114]: Ignoring "noauto" option for root device
	[  +1.819099] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000010] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.564840] systemd-fstab-generator[480]: Ignoring "noauto" option for root device
	[  +0.072999] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.084138] systemd-fstab-generator[492]: Ignoring "noauto" option for root device
	[  +0.179981] systemd-fstab-generator[506]: Ignoring "noauto" option for root device
	[  +0.166034] systemd-fstab-generator[518]: Ignoring "noauto" option for root device
	[  +0.326183] systemd-fstab-generator[547]: Ignoring "noauto" option for root device
	[  +6.075875] systemd-fstab-generator[608]: Ignoring "noauto" option for root device
	[  +0.056685] kauditd_printk_skb: 158 callbacks suppressed
	[  +2.602768] systemd-fstab-generator[751]: Ignoring "noauto" option for root device
	[ +11.328120] kauditd_printk_skb: 97 callbacks suppressed
	[Feb29 02:07] systemd-fstab-generator[1628]: Ignoring "noauto" option for root device
	[  +0.099071] kauditd_printk_skb: 53 callbacks suppressed
	[  +0.069964] systemd-fstab-generator[1640]: Ignoring "noauto" option for root device
	[  +0.204385] systemd-fstab-generator[1654]: Ignoring "noauto" option for root device
	[  +0.157084] systemd-fstab-generator[1666]: Ignoring "noauto" option for root device
	[  +0.374564] systemd-fstab-generator[1693]: Ignoring "noauto" option for root device
	
	
	==> etcd [9fafe8794413f107758942a1854756857bd6f0171f9e2b0e9fc0747650377c33] <==
	{"level":"info","ts":"2024-02-29T02:06:45.600227Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"48d332b29d0cdf97 became candidate at term 2"}
	{"level":"info","ts":"2024-02-29T02:06:45.600239Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"48d332b29d0cdf97 received MsgVoteResp from 48d332b29d0cdf97 at term 2"}
	{"level":"info","ts":"2024-02-29T02:06:45.600296Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"48d332b29d0cdf97 became leader at term 2"}
	{"level":"info","ts":"2024-02-29T02:06:45.600309Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 48d332b29d0cdf97 elected leader 48d332b29d0cdf97 at term 2"}
	{"level":"info","ts":"2024-02-29T02:06:45.605199Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-29T02:06:45.60693Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"48d332b29d0cdf97","local-member-attributes":"{Name:kubernetes-upgrade-335938 ClientURLs:[https://192.168.50.62:2379]}","request-path":"/0/members/48d332b29d0cdf97/attributes","cluster-id":"4f4301e400b1ef13","publish-timeout":"7s"}
	{"level":"info","ts":"2024-02-29T02:06:45.610026Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-02-29T02:06:45.615243Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.62:2379"}
	{"level":"info","ts":"2024-02-29T02:06:45.620341Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"4f4301e400b1ef13","local-member-id":"48d332b29d0cdf97","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-29T02:06:45.620602Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-29T02:06:45.620667Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-29T02:06:45.621267Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-02-29T02:06:45.62664Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-02-29T02:06:45.63525Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-02-29T02:06:45.639022Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-02-29T02:07:07.075926Z","caller":"traceutil/trace.go:171","msg":"trace[1341216548] transaction","detail":"{read_only:false; response_revision:364; number_of_response:1; }","duration":"111.375332ms","start":"2024-02-29T02:07:06.964538Z","end":"2024-02-29T02:07:07.075914Z","steps":["trace[1341216548] 'process raft request'  (duration: 111.300507ms)"],"step_count":1}
	{"level":"info","ts":"2024-02-29T02:07:08.203018Z","caller":"traceutil/trace.go:171","msg":"trace[169463814] transaction","detail":"{read_only:false; response_revision:365; number_of_response:1; }","duration":"221.154712ms","start":"2024-02-29T02:07:07.98178Z","end":"2024-02-29T02:07:08.202935Z","steps":["trace[169463814] 'process raft request'  (duration: 221.042459ms)"],"step_count":1}
	{"level":"info","ts":"2024-02-29T02:07:36.351561Z","caller":"traceutil/trace.go:171","msg":"trace[1792645583] transaction","detail":"{read_only:false; response_revision:389; number_of_response:1; }","duration":"267.717733ms","start":"2024-02-29T02:07:36.083802Z","end":"2024-02-29T02:07:36.35152Z","steps":["trace[1792645583] 'process raft request'  (duration: 267.548505ms)"],"step_count":1}
	{"level":"info","ts":"2024-02-29T02:07:36.354262Z","caller":"traceutil/trace.go:171","msg":"trace[184791119] transaction","detail":"{read_only:false; response_revision:390; number_of_response:1; }","duration":"200.454279ms","start":"2024-02-29T02:07:36.153792Z","end":"2024-02-29T02:07:36.354246Z","steps":["trace[184791119] 'process raft request'  (duration: 200.377335ms)"],"step_count":1}
	{"level":"warn","ts":"2024-02-29T02:07:39.324809Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"351.817669ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/\" range_end:\"/registry/minions0\" limit:500 ","response":"range_response_count:1 size:3679"}
	{"level":"info","ts":"2024-02-29T02:07:39.326613Z","caller":"traceutil/trace.go:171","msg":"trace[842895787] range","detail":"{range_begin:/registry/minions/; range_end:/registry/minions0; response_count:1; response_revision:392; }","duration":"353.656251ms","start":"2024-02-29T02:07:38.972859Z","end":"2024-02-29T02:07:39.326515Z","steps":["trace[842895787] 'range keys from in-memory index tree'  (duration: 351.657376ms)"],"step_count":1}
	{"level":"warn","ts":"2024-02-29T02:07:39.328632Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"111.986449ms","expected-duration":"100ms","prefix":"","request":"header:<ID:16111502265100183254 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-node-lease/kubernetes-upgrade-335938\" mod_revision:380 > success:<request_put:<key:\"/registry/leases/kube-node-lease/kubernetes-upgrade-335938\" value_size:523 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/kubernetes-upgrade-335938\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-02-29T02:07:39.329505Z","caller":"traceutil/trace.go:171","msg":"trace[1735417013] transaction","detail":"{read_only:false; response_revision:393; number_of_response:1; }","duration":"347.780672ms","start":"2024-02-29T02:07:38.981707Z","end":"2024-02-29T02:07:39.329488Z","steps":["trace[1735417013] 'process raft request'  (duration: 233.930167ms)","trace[1735417013] 'compare'  (duration: 111.892376ms)"],"step_count":2}
	{"level":"warn","ts":"2024-02-29T02:07:39.329867Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-02-29T02:07:38.981686Z","time spent":"347.96689ms","remote":"127.0.0.1:60316","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":589,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/leases/kube-node-lease/kubernetes-upgrade-335938\" mod_revision:380 > success:<request_put:<key:\"/registry/leases/kube-node-lease/kubernetes-upgrade-335938\" value_size:523 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/kubernetes-upgrade-335938\" > >"}
	{"level":"warn","ts":"2024-02-29T02:07:39.327326Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-02-29T02:07:38.972843Z","time spent":"354.449469ms","remote":"127.0.0.1:60232","response type":"/etcdserverpb.KV/Range","request count":0,"request size":43,"response count":1,"response size":3701,"request content":"key:\"/registry/minions/\" range_end:\"/registry/minions0\" limit:500 "}
	
	
	==> kernel <==
	 02:07:39 up 1 min,  0 users,  load average: 0.66, 0.22, 0.08
	Linux kubernetes-upgrade-335938 5.10.207 #1 SMP Fri Feb 23 02:44:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [74b26acdc623c943ac1a21bfd9474dc6b38509bd7bd5998706a787dba2965663] <==
	I0229 02:06:47.769448       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0229 02:06:47.770026       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0229 02:06:47.770196       1 shared_informer.go:318] Caches are synced for configmaps
	I0229 02:06:47.770641       1 controller.go:624] quota admission added evaluator for: namespaces
	I0229 02:06:47.770945       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0229 02:06:47.771206       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0229 02:06:47.771335       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0229 02:06:47.771290       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0229 02:06:47.833583       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0229 02:06:47.836487       1 cache.go:39] Caches are synced for autoregister controller
	I0229 02:06:48.679855       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0229 02:06:48.698706       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0229 02:06:48.698945       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0229 02:06:49.358418       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0229 02:06:49.415385       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0229 02:06:49.488395       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0229 02:06:49.495163       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.50.62]
	I0229 02:06:49.496183       1 controller.go:624] quota admission added evaluator for: endpoints
	I0229 02:06:49.500851       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0229 02:06:49.730905       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0229 02:06:53.895695       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0229 02:06:53.911288       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0229 02:06:53.929716       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0229 02:07:03.609626       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I0229 02:07:03.780593       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [1e6163c9790a827bb2ae2062c3d63338adc05380d53777496c8b4b4d3e42019c] <==
	I0229 02:07:03.531219       1 shared_informer.go:318] Caches are synced for taint
	I0229 02:07:03.533275       1 node_lifecycle_controller.go:1222] "Initializing eviction metric for zone" zone=""
	I0229 02:07:03.533542       1 node_lifecycle_controller.go:874] "Missing timestamp for Node. Assuming now as a timestamp" node="kubernetes-upgrade-335938"
	I0229 02:07:03.533749       1 node_lifecycle_controller.go:1068] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I0229 02:07:03.531241       1 shared_informer.go:318] Caches are synced for PVC protection
	I0229 02:07:03.535158       1 event.go:376] "Event occurred" object="kubernetes-upgrade-335938" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node kubernetes-upgrade-335938 event: Registered Node kubernetes-upgrade-335938 in Controller"
	I0229 02:07:03.580044       1 shared_informer.go:318] Caches are synced for service account
	I0229 02:07:03.597429       1 shared_informer.go:318] Caches are synced for namespace
	I0229 02:07:03.646081       1 shared_informer.go:318] Caches are synced for resource quota
	I0229 02:07:03.676185       1 event.go:376] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-6862w"
	I0229 02:07:03.676541       1 shared_informer.go:318] Caches are synced for disruption
	I0229 02:07:03.680008       1 shared_informer.go:318] Caches are synced for deployment
	I0229 02:07:03.694431       1 shared_informer.go:318] Caches are synced for resource quota
	I0229 02:07:03.806649       1 event.go:376] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-76f75df574 to 1"
	I0229 02:07:03.882778       1 event.go:376] "Event occurred" object="kube-system/coredns-76f75df574" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-76f75df574-ck24q"
	I0229 02:07:03.905147       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="106.645737ms"
	I0229 02:07:03.938852       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="31.945045ms"
	I0229 02:07:03.950105       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="155.61µs"
	I0229 02:07:03.952678       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="44.952µs"
	I0229 02:07:04.051543       1 shared_informer.go:318] Caches are synced for garbage collector
	I0229 02:07:04.078194       1 shared_informer.go:318] Caches are synced for garbage collector
	I0229 02:07:04.078896       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0229 02:07:08.207794       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="58.903µs"
	I0229 02:07:08.268045       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="18.263459ms"
	I0229 02:07:08.268602       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="60.811µs"
	
	
	==> kube-proxy [a0123892ecea132a6137daf45e3b7e773957dd6395ef0dac9d9787c0380cc924] <==
	I0229 02:07:04.464758       1 server_others.go:72] "Using iptables proxy"
	I0229 02:07:04.482622       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.50.62"]
	I0229 02:07:04.562097       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0229 02:07:04.562154       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0229 02:07:04.562191       1 server_others.go:168] "Using iptables Proxier"
	I0229 02:07:04.565747       1 proxier.go:246] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0229 02:07:04.566150       1 server.go:865] "Version info" version="v1.29.0-rc.2"
	I0229 02:07:04.566167       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0229 02:07:04.567231       1 config.go:188] "Starting service config controller"
	I0229 02:07:04.567872       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0229 02:07:04.568476       1 config.go:97] "Starting endpoint slice config controller"
	I0229 02:07:04.568612       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0229 02:07:04.570840       1 config.go:315] "Starting node config controller"
	I0229 02:07:04.573115       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0229 02:07:04.669374       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0229 02:07:04.669466       1 shared_informer.go:318] Caches are synced for service config
	I0229 02:07:04.673463       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [90448f2a0e6b9b45faa862f388769edbeee26fc729f76ebb9ac2195f91b6cf9e] <==
	W0229 02:06:47.815944       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0229 02:06:47.818283       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0229 02:06:48.764175       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0229 02:06:48.764202       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0229 02:06:48.792830       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0229 02:06:48.792892       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0229 02:06:48.813701       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0229 02:06:48.813770       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0229 02:06:48.876184       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0229 02:06:48.876258       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0229 02:06:48.916722       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0229 02:06:48.916785       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0229 02:06:48.997836       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0229 02:06:48.998243       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0229 02:06:49.004321       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0229 02:06:49.004369       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0229 02:06:49.006649       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0229 02:06:49.006740       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0229 02:06:49.037355       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0229 02:06:49.037402       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0229 02:06:49.056901       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0229 02:06:49.057249       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0229 02:06:49.081439       1 reflector.go:539] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0229 02:06:49.081513       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0229 02:06:50.789430       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Feb 29 02:06:58 kubernetes-upgrade-335938 kubelet[758]: I0229 02:06:58.246496     758 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-kubernetes-upgrade-335938" podStartSLOduration=1.246442068 podStartE2EDuration="1.246442068s" podCreationTimestamp="2024-02-29 02:06:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-29 02:06:58.235517534 +0000 UTC m=+14.855109247" watchObservedRunningTime="2024-02-29 02:06:58.246442068 +0000 UTC m=+14.866033784"
	Feb 29 02:07:00 kubernetes-upgrade-335938 kubelet[758]: I0229 02:07:00.288595     758 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-kubernetes-upgrade-335938" podStartSLOduration=2.288547826 podStartE2EDuration="2.288547826s" podCreationTimestamp="2024-02-29 02:06:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-29 02:06:58.247499707 +0000 UTC m=+14.867091420" watchObservedRunningTime="2024-02-29 02:07:00.288547826 +0000 UTC m=+16.908139539"
	Feb 29 02:07:03 kubernetes-upgrade-335938 kubelet[758]: I0229 02:07:03.614831     758 topology_manager.go:215] "Topology Admit Handler" podUID="a6006131-44fc-4185-884a-ce0d353924e0" podNamespace="kube-system" podName="storage-provisioner"
	Feb 29 02:07:03 kubernetes-upgrade-335938 kubelet[758]: I0229 02:07:03.706160     758 topology_manager.go:215] "Topology Admit Handler" podUID="12cfe8f8-17f3-4d52-a68b-e50cbfc16f74" podNamespace="kube-system" podName="kube-proxy-6862w"
	Feb 29 02:07:03 kubernetes-upgrade-335938 kubelet[758]: I0229 02:07:03.710844     758 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2ltp\" (UniqueName: \"kubernetes.io/projected/a6006131-44fc-4185-884a-ce0d353924e0-kube-api-access-d2ltp\") pod \"storage-provisioner\" (UID: \"a6006131-44fc-4185-884a-ce0d353924e0\") " pod="kube-system/storage-provisioner"
	Feb 29 02:07:03 kubernetes-upgrade-335938 kubelet[758]: I0229 02:07:03.711048     758 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/a6006131-44fc-4185-884a-ce0d353924e0-tmp\") pod \"storage-provisioner\" (UID: \"a6006131-44fc-4185-884a-ce0d353924e0\") " pod="kube-system/storage-provisioner"
	Feb 29 02:07:03 kubernetes-upgrade-335938 kubelet[758]: I0229 02:07:03.811818     758 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/12cfe8f8-17f3-4d52-a68b-e50cbfc16f74-lib-modules\") pod \"kube-proxy-6862w\" (UID: \"12cfe8f8-17f3-4d52-a68b-e50cbfc16f74\") " pod="kube-system/kube-proxy-6862w"
	Feb 29 02:07:03 kubernetes-upgrade-335938 kubelet[758]: I0229 02:07:03.811910     758 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cw6b5\" (UniqueName: \"kubernetes.io/projected/12cfe8f8-17f3-4d52-a68b-e50cbfc16f74-kube-api-access-cw6b5\") pod \"kube-proxy-6862w\" (UID: \"12cfe8f8-17f3-4d52-a68b-e50cbfc16f74\") " pod="kube-system/kube-proxy-6862w"
	Feb 29 02:07:03 kubernetes-upgrade-335938 kubelet[758]: I0229 02:07:03.812013     758 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/12cfe8f8-17f3-4d52-a68b-e50cbfc16f74-xtables-lock\") pod \"kube-proxy-6862w\" (UID: \"12cfe8f8-17f3-4d52-a68b-e50cbfc16f74\") " pod="kube-system/kube-proxy-6862w"
	Feb 29 02:07:03 kubernetes-upgrade-335938 kubelet[758]: I0229 02:07:03.812065     758 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/12cfe8f8-17f3-4d52-a68b-e50cbfc16f74-kube-proxy\") pod \"kube-proxy-6862w\" (UID: \"12cfe8f8-17f3-4d52-a68b-e50cbfc16f74\") " pod="kube-system/kube-proxy-6862w"
	Feb 29 02:07:03 kubernetes-upgrade-335938 kubelet[758]: I0229 02:07:03.892525     758 topology_manager.go:215] "Topology Admit Handler" podUID="b9c803d9-4ebf-4bf6-b7ea-d4187df2e1e3" podNamespace="kube-system" podName="coredns-76f75df574-ck24q"
	Feb 29 02:07:03 kubernetes-upgrade-335938 kubelet[758]: W0229 02:07:03.899076     758 reflector.go:539] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:kubernetes-upgrade-335938" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'kubernetes-upgrade-335938' and this object
	Feb 29 02:07:03 kubernetes-upgrade-335938 kubelet[758]: E0229 02:07:03.899217     758 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:kubernetes-upgrade-335938" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'kubernetes-upgrade-335938' and this object
	Feb 29 02:07:04 kubernetes-upgrade-335938 kubelet[758]: I0229 02:07:04.013791     758 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hx4pv\" (UniqueName: \"kubernetes.io/projected/b9c803d9-4ebf-4bf6-b7ea-d4187df2e1e3-kube-api-access-hx4pv\") pod \"coredns-76f75df574-ck24q\" (UID: \"b9c803d9-4ebf-4bf6-b7ea-d4187df2e1e3\") " pod="kube-system/coredns-76f75df574-ck24q"
	Feb 29 02:07:04 kubernetes-upgrade-335938 kubelet[758]: I0229 02:07:04.016068     758 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b9c803d9-4ebf-4bf6-b7ea-d4187df2e1e3-config-volume\") pod \"coredns-76f75df574-ck24q\" (UID: \"b9c803d9-4ebf-4bf6-b7ea-d4187df2e1e3\") " pod="kube-system/coredns-76f75df574-ck24q"
	Feb 29 02:07:04 kubernetes-upgrade-335938 kubelet[758]: I0229 02:07:04.859692     758 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=10.859635201 podStartE2EDuration="10.859635201s" podCreationTimestamp="2024-02-29 02:06:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-29 02:07:04.859388547 +0000 UTC m=+21.478980282" watchObservedRunningTime="2024-02-29 02:07:04.859635201 +0000 UTC m=+21.479226914"
	Feb 29 02:07:04 kubernetes-upgrade-335938 kubelet[758]: I0229 02:07:04.860360     758 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-6862w" podStartSLOduration=1.860304346 podStartE2EDuration="1.860304346s" podCreationTimestamp="2024-02-29 02:07:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-29 02:07:04.807171384 +0000 UTC m=+21.426763101" watchObservedRunningTime="2024-02-29 02:07:04.860304346 +0000 UTC m=+21.479896261"
	Feb 29 02:07:08 kubernetes-upgrade-335938 kubelet[758]: I0229 02:07:08.248346     758 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-ck24q" podStartSLOduration=5.248254784 podStartE2EDuration="5.248254784s" podCreationTimestamp="2024-02-29 02:07:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-29 02:07:08.208925716 +0000 UTC m=+24.828517449" watchObservedRunningTime="2024-02-29 02:07:08.248254784 +0000 UTC m=+24.867846497"
	Feb 29 02:07:08 kubernetes-upgrade-335938 kubelet[758]: I0229 02:07:08.461116     758 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Feb 29 02:07:08 kubernetes-upgrade-335938 kubelet[758]: I0229 02:07:08.462252     758 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Feb 29 02:07:32 kubernetes-upgrade-335938 kubelet[758]: W0229 02:07:32.038437     758 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "/run/containerd/containerd.sock", ServerName: "%2Frun%2Fcontainerd%2Fcontainerd.sock", }. Err: connection error: desc = "transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: no such file or directory"
	Feb 29 02:07:32 kubernetes-upgrade-335938 kubelet[758]: E0229 02:07:32.038604     758 remote_runtime.go:294] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: no such file or directory\"" filter="nil"
	Feb 29 02:07:32 kubernetes-upgrade-335938 kubelet[758]: E0229 02:07:32.038654     758 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
	Feb 29 02:07:32 kubernetes-upgrade-335938 kubelet[758]: E0229 02:07:32.039170     758 generic.go:238] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
	Feb 29 02:07:35 kubernetes-upgrade-335938 kubelet[758]: I0229 02:07:35.067429     758 scope.go:117] "RemoveContainer" containerID="78c483812ce39f712a5772be326e575fb6a3d525dd9e32f122598ce511212555"
	
	
	==> storage-provisioner [78c483812ce39f712a5772be326e575fb6a3d525dd9e32f122598ce511212555] <==
	I0229 02:07:04.326860       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0229 02:07:34.334917       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [8291c953ce988cae49f4c4ac00065b0b78ae7c387aea2a54fab9970209ad55ee] <==
	I0229 02:07:35.340431       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0229 02:07:35.358860       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0229 02:07:35.358945       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0229 02:07:35.407267       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0229 02:07:35.407594       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_kubernetes-upgrade-335938_c83f270e-c15d-4684-bd62-1e9099939b18!
	I0229 02:07:35.408181       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a29967f7-0c13-42eb-a435-68ddcbff77fa", APIVersion:"v1", ResourceVersion:"387", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' kubernetes-upgrade-335938_c83f270e-c15d-4684-bd62-1e9099939b18 became leader
	I0229 02:07:35.508361       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_kubernetes-upgrade-335938_c83f270e-c15d-4684-bd62-1e9099939b18!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-335938 -n kubernetes-upgrade-335938
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-335938 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestKubernetesUpgrade FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "kubernetes-upgrade-335938" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-335938
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-335938: (1.297863831s)
--- FAIL: TestKubernetesUpgrade (387.10s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (295.14s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-254968 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.16.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-254968 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.16.0: exit status 109 (4m54.849132924s)

                                                
                                                
-- stdout --
	* [old-k8s-version-254968] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18063
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18063-309085/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18063-309085/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting control plane node old-k8s-version-254968 in cluster old-k8s-version-254968
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.16.0 on containerd 1.7.11 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0229 02:07:41.704977  352939 out.go:291] Setting OutFile to fd 1 ...
	I0229 02:07:41.705079  352939 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 02:07:41.705083  352939 out.go:304] Setting ErrFile to fd 2...
	I0229 02:07:41.705088  352939 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 02:07:41.705296  352939 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18063-309085/.minikube/bin
	I0229 02:07:41.708079  352939 out.go:298] Setting JSON to false
	I0229 02:07:41.709089  352939 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":6606,"bootTime":1709165856,"procs":220,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1052-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0229 02:07:41.709157  352939 start.go:139] virtualization: kvm guest
	I0229 02:07:41.711142  352939 out.go:177] * [old-k8s-version-254968] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0229 02:07:41.713258  352939 out.go:177]   - MINIKUBE_LOCATION=18063
	I0229 02:07:41.713319  352939 notify.go:220] Checking for updates...
	I0229 02:07:41.714891  352939 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0229 02:07:41.717194  352939 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18063-309085/kubeconfig
	I0229 02:07:41.718806  352939 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18063-309085/.minikube
	I0229 02:07:41.720205  352939 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0229 02:07:41.721848  352939 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0229 02:07:41.723657  352939 config.go:182] Loaded profile config "bridge-704272": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0229 02:07:41.723776  352939 config.go:182] Loaded profile config "enable-default-cni-704272": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0229 02:07:41.723871  352939 config.go:182] Loaded profile config "flannel-704272": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0229 02:07:41.723997  352939 driver.go:392] Setting default libvirt URI to qemu:///system
	I0229 02:07:41.768138  352939 out.go:177] * Using the kvm2 driver based on user configuration
	I0229 02:07:41.770197  352939 start.go:299] selected driver: kvm2
	I0229 02:07:41.770219  352939 start.go:903] validating driver "kvm2" against <nil>
	I0229 02:07:41.770237  352939 start.go:914] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0229 02:07:41.771290  352939 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 02:07:41.771410  352939 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18063-309085/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0229 02:07:41.788009  352939 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0229 02:07:41.788080  352939 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0229 02:07:41.788388  352939 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0229 02:07:41.788483  352939 cni.go:84] Creating CNI manager for ""
	I0229 02:07:41.788500  352939 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0229 02:07:41.788512  352939 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0229 02:07:41.788524  352939 start_flags.go:323] config:
	{Name:old-k8s-version-254968 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-254968 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 02:07:41.788719  352939 iso.go:125] acquiring lock: {Name:mk1a6013324bd96d611c2de882ca0af6f4df38f8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 02:07:41.791730  352939 out.go:177] * Starting control plane node old-k8s-version-254968 in cluster old-k8s-version-254968
	I0229 02:07:41.793140  352939 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I0229 02:07:41.793192  352939 preload.go:148] Found local preload: /home/jenkins/minikube-integration/18063-309085/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4
	I0229 02:07:41.793215  352939 cache.go:56] Caching tarball of preloaded images
	I0229 02:07:41.793328  352939 preload.go:174] Found /home/jenkins/minikube-integration/18063-309085/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0229 02:07:41.793343  352939 cache.go:59] Finished verifying existence of preloaded tar for  v1.16.0 on containerd
	I0229 02:07:41.793481  352939 profile.go:148] Saving config to /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/old-k8s-version-254968/config.json ...
	I0229 02:07:41.793509  352939 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/old-k8s-version-254968/config.json: {Name:mkbb8358306f40bb862484a128b4ae01d75bf612 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:07:41.793672  352939 start.go:365] acquiring machines lock for old-k8s-version-254968: {Name:mk8de78527e9cb979575b614e5d893b33768243a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0229 02:08:01.007634  352939 start.go:369] acquired machines lock for "old-k8s-version-254968" in 19.213930262s
	I0229 02:08:01.007711  352939 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-254968 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-254968 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0229 02:08:01.007845  352939 start.go:125] createHost starting for "" (driver="kvm2")
	I0229 02:08:01.009643  352939 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0229 02:08:01.009936  352939 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0229 02:08:01.009998  352939 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:08:01.030363  352939 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40987
	I0229 02:08:01.030834  352939 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:08:01.031403  352939 main.go:141] libmachine: Using API Version  1
	I0229 02:08:01.031424  352939 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:08:01.031889  352939 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:08:01.032130  352939 main.go:141] libmachine: (old-k8s-version-254968) Calling .GetMachineName
	I0229 02:08:01.032294  352939 main.go:141] libmachine: (old-k8s-version-254968) Calling .DriverName
	I0229 02:08:01.032447  352939 start.go:159] libmachine.API.Create for "old-k8s-version-254968" (driver="kvm2")
	I0229 02:08:01.032477  352939 client.go:168] LocalClient.Create starting
	I0229 02:08:01.032512  352939 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18063-309085/.minikube/certs/ca.pem
	I0229 02:08:01.032552  352939 main.go:141] libmachine: Decoding PEM data...
	I0229 02:08:01.032572  352939 main.go:141] libmachine: Parsing certificate...
	I0229 02:08:01.032654  352939 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18063-309085/.minikube/certs/cert.pem
	I0229 02:08:01.032687  352939 main.go:141] libmachine: Decoding PEM data...
	I0229 02:08:01.032705  352939 main.go:141] libmachine: Parsing certificate...
	I0229 02:08:01.032729  352939 main.go:141] libmachine: Running pre-create checks...
	I0229 02:08:01.032745  352939 main.go:141] libmachine: (old-k8s-version-254968) Calling .PreCreateCheck
	I0229 02:08:01.033115  352939 main.go:141] libmachine: (old-k8s-version-254968) Calling .GetConfigRaw
	I0229 02:08:01.033546  352939 main.go:141] libmachine: Creating machine...
	I0229 02:08:01.033563  352939 main.go:141] libmachine: (old-k8s-version-254968) Calling .Create
	I0229 02:08:01.033677  352939 main.go:141] libmachine: (old-k8s-version-254968) Creating KVM machine...
	I0229 02:08:01.034917  352939 main.go:141] libmachine: (old-k8s-version-254968) DBG | found existing default KVM network
	I0229 02:08:01.036635  352939 main.go:141] libmachine: (old-k8s-version-254968) DBG | I0229 02:08:01.036471  353079 network.go:212] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr4 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:50:0f:69} reservation:<nil>}
	I0229 02:08:01.037812  352939 main.go:141] libmachine: (old-k8s-version-254968) DBG | I0229 02:08:01.037723  353079 network.go:207] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00028aab0}
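The two network.go lines above show the free-subnet scan: 192.168.39.0/24 is already claimed by virbr4, so the next candidate, 192.168.50.0/24, is reserved. A minimal Go sketch of that scan follows; the starting octet and the step of 11 are inferred from this log (39 then 50), not taken from minikube's actual network.go.

    package main

    import "fmt"

    // pickFreeSubnet walks candidate private /24 subnets and returns the
    // first one not already claimed by an existing libvirt network.
    func pickFreeSubnet(taken map[string]bool) (string, bool) {
        for octet := 39; octet <= 254; octet += 11 { // step inferred from the log
            subnet := fmt.Sprintf("192.168.%d.0/24", octet)
            if !taken[subnet] {
                return subnet, true
            }
        }
        return "", false
    }

    func main() {
        taken := map[string]bool{"192.168.39.0/24": true} // virbr4 in the log
        if s, ok := pickFreeSubnet(taken); ok {
            fmt.Println("using free private subnet", s) // 192.168.50.0/24
        }
    }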
	I0229 02:08:01.042960  352939 main.go:141] libmachine: (old-k8s-version-254968) DBG | trying to create private KVM network mk-old-k8s-version-254968 192.168.50.0/24...
	I0229 02:08:01.118662  352939 main.go:141] libmachine: (old-k8s-version-254968) DBG | private KVM network mk-old-k8s-version-254968 192.168.50.0/24 created
	I0229 02:08:01.118718  352939 main.go:141] libmachine: (old-k8s-version-254968) DBG | I0229 02:08:01.118614  353079 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18063-309085/.minikube
	I0229 02:08:01.118737  352939 main.go:141] libmachine: (old-k8s-version-254968) Setting up store path in /home/jenkins/minikube-integration/18063-309085/.minikube/machines/old-k8s-version-254968 ...
	I0229 02:08:01.118770  352939 main.go:141] libmachine: (old-k8s-version-254968) Building disk image from file:///home/jenkins/minikube-integration/18063-309085/.minikube/cache/iso/amd64/minikube-v1.32.1-1708638130-18020-amd64.iso
	I0229 02:08:01.118787  352939 main.go:141] libmachine: (old-k8s-version-254968) Downloading /home/jenkins/minikube-integration/18063-309085/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18063-309085/.minikube/cache/iso/amd64/minikube-v1.32.1-1708638130-18020-amd64.iso...
	I0229 02:08:01.399961  352939 main.go:141] libmachine: (old-k8s-version-254968) DBG | I0229 02:08:01.399805  353079 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18063-309085/.minikube/machines/old-k8s-version-254968/id_rsa...
	I0229 02:08:01.652866  352939 main.go:141] libmachine: (old-k8s-version-254968) DBG | I0229 02:08:01.652702  353079 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18063-309085/.minikube/machines/old-k8s-version-254968/old-k8s-version-254968.rawdisk...
	I0229 02:08:01.652913  352939 main.go:141] libmachine: (old-k8s-version-254968) DBG | Writing magic tar header
	I0229 02:08:01.652934  352939 main.go:141] libmachine: (old-k8s-version-254968) DBG | Writing SSH key tar header
	I0229 02:08:01.652952  352939 main.go:141] libmachine: (old-k8s-version-254968) DBG | I0229 02:08:01.652859  353079 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18063-309085/.minikube/machines/old-k8s-version-254968 ...
	I0229 02:08:01.653059  352939 main.go:141] libmachine: (old-k8s-version-254968) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18063-309085/.minikube/machines/old-k8s-version-254968
	I0229 02:08:01.653095  352939 main.go:141] libmachine: (old-k8s-version-254968) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18063-309085/.minikube/machines
	I0229 02:08:01.653120  352939 main.go:141] libmachine: (old-k8s-version-254968) Setting executable bit set on /home/jenkins/minikube-integration/18063-309085/.minikube/machines/old-k8s-version-254968 (perms=drwx------)
	I0229 02:08:01.653142  352939 main.go:141] libmachine: (old-k8s-version-254968) Setting executable bit set on /home/jenkins/minikube-integration/18063-309085/.minikube/machines (perms=drwxr-xr-x)
	I0229 02:08:01.653155  352939 main.go:141] libmachine: (old-k8s-version-254968) Setting executable bit set on /home/jenkins/minikube-integration/18063-309085/.minikube (perms=drwxr-xr-x)
	I0229 02:08:01.653171  352939 main.go:141] libmachine: (old-k8s-version-254968) Setting executable bit set on /home/jenkins/minikube-integration/18063-309085 (perms=drwxrwxr-x)
	I0229 02:08:01.653183  352939 main.go:141] libmachine: (old-k8s-version-254968) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0229 02:08:01.653205  352939 main.go:141] libmachine: (old-k8s-version-254968) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18063-309085/.minikube
	I0229 02:08:01.653219  352939 main.go:141] libmachine: (old-k8s-version-254968) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0229 02:08:01.653236  352939 main.go:141] libmachine: (old-k8s-version-254968) Creating domain...
	I0229 02:08:01.653253  352939 main.go:141] libmachine: (old-k8s-version-254968) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18063-309085
	I0229 02:08:01.653265  352939 main.go:141] libmachine: (old-k8s-version-254968) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0229 02:08:01.653317  352939 main.go:141] libmachine: (old-k8s-version-254968) DBG | Checking permissions on dir: /home/jenkins
	I0229 02:08:01.653336  352939 main.go:141] libmachine: (old-k8s-version-254968) DBG | Checking permissions on dir: /home
	I0229 02:08:01.653351  352939 main.go:141] libmachine: (old-k8s-version-254968) DBG | Skipping /home - not owner
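The permission walk above climbs from the machine directory toward /, setting the owner-executable (search) bit on each directory it owns and skipping the first one it does not (/home here). A simplified sketch, assuming a plain stat-and-chmod loop; minikube's common.go also checks ownership explicitly:

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    // fixPermissions ensures each parent of machineDir is searchable, so
    // the SSH key stored under it stays reachable. Stops at the first
    // directory it cannot chmod (e.g. "/home - not owner").
    func fixPermissions(machineDir string) error {
        for dir := machineDir; dir != "/"; dir = filepath.Dir(dir) {
            fi, err := os.Stat(dir)
            if err != nil {
                return err
            }
            if fi.Mode().Perm()&0o100 == 0 { // owner x bit missing
                if err := os.Chmod(dir, fi.Mode().Perm()|0o100); err != nil {
                    fmt.Println("skipping", dir, "-", err)
                    return nil
                }
            }
        }
        return nil
    }

    func main() {
        _ = fixPermissions("/tmp") // stand-in for the .minikube machine dir
    }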
	I0229 02:08:01.654502  352939 main.go:141] libmachine: (old-k8s-version-254968) define libvirt domain using xml: 
	I0229 02:08:01.654537  352939 main.go:141] libmachine: (old-k8s-version-254968) <domain type='kvm'>
	I0229 02:08:01.654547  352939 main.go:141] libmachine: (old-k8s-version-254968)   <name>old-k8s-version-254968</name>
	I0229 02:08:01.654556  352939 main.go:141] libmachine: (old-k8s-version-254968)   <memory unit='MiB'>2200</memory>
	I0229 02:08:01.654573  352939 main.go:141] libmachine: (old-k8s-version-254968)   <vcpu>2</vcpu>
	I0229 02:08:01.654580  352939 main.go:141] libmachine: (old-k8s-version-254968)   <features>
	I0229 02:08:01.654588  352939 main.go:141] libmachine: (old-k8s-version-254968)     <acpi/>
	I0229 02:08:01.654602  352939 main.go:141] libmachine: (old-k8s-version-254968)     <apic/>
	I0229 02:08:01.654609  352939 main.go:141] libmachine: (old-k8s-version-254968)     <pae/>
	I0229 02:08:01.654620  352939 main.go:141] libmachine: (old-k8s-version-254968)     
	I0229 02:08:01.654629  352939 main.go:141] libmachine: (old-k8s-version-254968)   </features>
	I0229 02:08:01.654637  352939 main.go:141] libmachine: (old-k8s-version-254968)   <cpu mode='host-passthrough'>
	I0229 02:08:01.654645  352939 main.go:141] libmachine: (old-k8s-version-254968)   
	I0229 02:08:01.654651  352939 main.go:141] libmachine: (old-k8s-version-254968)   </cpu>
	I0229 02:08:01.654659  352939 main.go:141] libmachine: (old-k8s-version-254968)   <os>
	I0229 02:08:01.654665  352939 main.go:141] libmachine: (old-k8s-version-254968)     <type>hvm</type>
	I0229 02:08:01.654673  352939 main.go:141] libmachine: (old-k8s-version-254968)     <boot dev='cdrom'/>
	I0229 02:08:01.654680  352939 main.go:141] libmachine: (old-k8s-version-254968)     <boot dev='hd'/>
	I0229 02:08:01.654710  352939 main.go:141] libmachine: (old-k8s-version-254968)     <bootmenu enable='no'/>
	I0229 02:08:01.654730  352939 main.go:141] libmachine: (old-k8s-version-254968)   </os>
	I0229 02:08:01.654741  352939 main.go:141] libmachine: (old-k8s-version-254968)   <devices>
	I0229 02:08:01.654759  352939 main.go:141] libmachine: (old-k8s-version-254968)     <disk type='file' device='cdrom'>
	I0229 02:08:01.654773  352939 main.go:141] libmachine: (old-k8s-version-254968)       <source file='/home/jenkins/minikube-integration/18063-309085/.minikube/machines/old-k8s-version-254968/boot2docker.iso'/>
	I0229 02:08:01.654781  352939 main.go:141] libmachine: (old-k8s-version-254968)       <target dev='hdc' bus='scsi'/>
	I0229 02:08:01.654790  352939 main.go:141] libmachine: (old-k8s-version-254968)       <readonly/>
	I0229 02:08:01.654802  352939 main.go:141] libmachine: (old-k8s-version-254968)     </disk>
	I0229 02:08:01.654813  352939 main.go:141] libmachine: (old-k8s-version-254968)     <disk type='file' device='disk'>
	I0229 02:08:01.654822  352939 main.go:141] libmachine: (old-k8s-version-254968)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0229 02:08:01.654837  352939 main.go:141] libmachine: (old-k8s-version-254968)       <source file='/home/jenkins/minikube-integration/18063-309085/.minikube/machines/old-k8s-version-254968/old-k8s-version-254968.rawdisk'/>
	I0229 02:08:01.654845  352939 main.go:141] libmachine: (old-k8s-version-254968)       <target dev='hda' bus='virtio'/>
	I0229 02:08:01.654852  352939 main.go:141] libmachine: (old-k8s-version-254968)     </disk>
	I0229 02:08:01.654860  352939 main.go:141] libmachine: (old-k8s-version-254968)     <interface type='network'>
	I0229 02:08:01.654884  352939 main.go:141] libmachine: (old-k8s-version-254968)       <source network='mk-old-k8s-version-254968'/>
	I0229 02:08:01.654906  352939 main.go:141] libmachine: (old-k8s-version-254968)       <model type='virtio'/>
	I0229 02:08:01.654915  352939 main.go:141] libmachine: (old-k8s-version-254968)     </interface>
	I0229 02:08:01.654923  352939 main.go:141] libmachine: (old-k8s-version-254968)     <interface type='network'>
	I0229 02:08:01.654933  352939 main.go:141] libmachine: (old-k8s-version-254968)       <source network='default'/>
	I0229 02:08:01.654940  352939 main.go:141] libmachine: (old-k8s-version-254968)       <model type='virtio'/>
	I0229 02:08:01.654951  352939 main.go:141] libmachine: (old-k8s-version-254968)     </interface>
	I0229 02:08:01.654962  352939 main.go:141] libmachine: (old-k8s-version-254968)     <serial type='pty'>
	I0229 02:08:01.654971  352939 main.go:141] libmachine: (old-k8s-version-254968)       <target port='0'/>
	I0229 02:08:01.654983  352939 main.go:141] libmachine: (old-k8s-version-254968)     </serial>
	I0229 02:08:01.654992  352939 main.go:141] libmachine: (old-k8s-version-254968)     <console type='pty'>
	I0229 02:08:01.654999  352939 main.go:141] libmachine: (old-k8s-version-254968)       <target type='serial' port='0'/>
	I0229 02:08:01.655008  352939 main.go:141] libmachine: (old-k8s-version-254968)     </console>
	I0229 02:08:01.655014  352939 main.go:141] libmachine: (old-k8s-version-254968)     <rng model='virtio'>
	I0229 02:08:01.655022  352939 main.go:141] libmachine: (old-k8s-version-254968)       <backend model='random'>/dev/random</backend>
	I0229 02:08:01.655028  352939 main.go:141] libmachine: (old-k8s-version-254968)     </rng>
	I0229 02:08:01.655034  352939 main.go:141] libmachine: (old-k8s-version-254968)     
	I0229 02:08:01.655041  352939 main.go:141] libmachine: (old-k8s-version-254968)     
	I0229 02:08:01.655048  352939 main.go:141] libmachine: (old-k8s-version-254968)   </devices>
	I0229 02:08:01.655055  352939 main.go:141] libmachine: (old-k8s-version-254968) </domain>
	I0229 02:08:01.655064  352939 main.go:141] libmachine: (old-k8s-version-254968) 
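The domain XML above is generated from the profile's parameters (name, 2200 MiB memory, 2 vCPUs, the boot2docker ISO, the raw disk, and the per-profile network). An illustrative text/template reduction of it, not minikube's actual template, which carries more devices and attributes:

    package main

    import (
        "os"
        "text/template"
    )

    const domainTmpl = `<domain type='kvm'>
      <name>{{.Name}}</name>
      <memory unit='MiB'>{{.MemoryMiB}}</memory>
      <vcpu>{{.CPUs}}</vcpu>
      <os><type>hvm</type><boot dev='cdrom'/><boot dev='hd'/></os>
      <devices>
        <disk type='file' device='cdrom'><source file='{{.ISO}}'/><target dev='hdc' bus='scsi'/><readonly/></disk>
        <disk type='file' device='disk'><source file='{{.Disk}}'/><target dev='hda' bus='virtio'/></disk>
        <interface type='network'><source network='{{.Network}}'/><model type='virtio'/></interface>
      </devices>
    </domain>
    `

    func main() {
        t := template.Must(template.New("domain").Parse(domainTmpl))
        _ = t.Execute(os.Stdout, map[string]any{
            "Name": "old-k8s-version-254968", "MemoryMiB": 2200, "CPUs": 2,
            "ISO": ".../boot2docker.iso", "Disk": ".../old-k8s-version-254968.rawdisk", // placeholder paths
            "Network": "mk-old-k8s-version-254968",
        })
    }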
	I0229 02:08:01.659142  352939 main.go:141] libmachine: (old-k8s-version-254968) DBG | domain old-k8s-version-254968 has defined MAC address 52:54:00:86:cb:a0 in network default
	I0229 02:08:01.659803  352939 main.go:141] libmachine: (old-k8s-version-254968) DBG | domain old-k8s-version-254968 has defined MAC address 52:54:00:7f:e3:b1 in network mk-old-k8s-version-254968
	I0229 02:08:01.659838  352939 main.go:141] libmachine: (old-k8s-version-254968) Ensuring networks are active...
	I0229 02:08:01.660669  352939 main.go:141] libmachine: (old-k8s-version-254968) Ensuring network default is active
	I0229 02:08:01.661084  352939 main.go:141] libmachine: (old-k8s-version-254968) Ensuring network mk-old-k8s-version-254968 is active
	I0229 02:08:01.661883  352939 main.go:141] libmachine: (old-k8s-version-254968) Getting domain xml...
	I0229 02:08:01.662794  352939 main.go:141] libmachine: (old-k8s-version-254968) Creating domain...
	I0229 02:08:02.924245  352939 main.go:141] libmachine: (old-k8s-version-254968) Waiting to get IP...
	I0229 02:08:02.925061  352939 main.go:141] libmachine: (old-k8s-version-254968) DBG | domain old-k8s-version-254968 has defined MAC address 52:54:00:7f:e3:b1 in network mk-old-k8s-version-254968
	I0229 02:08:02.925504  352939 main.go:141] libmachine: (old-k8s-version-254968) DBG | unable to find current IP address of domain old-k8s-version-254968 in network mk-old-k8s-version-254968
	I0229 02:08:02.925535  352939 main.go:141] libmachine: (old-k8s-version-254968) DBG | I0229 02:08:02.925482  353079 retry.go:31] will retry after 202.899151ms: waiting for machine to come up
	I0229 02:08:03.131026  352939 main.go:141] libmachine: (old-k8s-version-254968) DBG | domain old-k8s-version-254968 has defined MAC address 52:54:00:7f:e3:b1 in network mk-old-k8s-version-254968
	I0229 02:08:03.131668  352939 main.go:141] libmachine: (old-k8s-version-254968) DBG | unable to find current IP address of domain old-k8s-version-254968 in network mk-old-k8s-version-254968
	I0229 02:08:03.131689  352939 main.go:141] libmachine: (old-k8s-version-254968) DBG | I0229 02:08:03.131587  353079 retry.go:31] will retry after 285.750874ms: waiting for machine to come up
	I0229 02:08:03.419422  352939 main.go:141] libmachine: (old-k8s-version-254968) DBG | domain old-k8s-version-254968 has defined MAC address 52:54:00:7f:e3:b1 in network mk-old-k8s-version-254968
	I0229 02:08:03.420180  352939 main.go:141] libmachine: (old-k8s-version-254968) DBG | unable to find current IP address of domain old-k8s-version-254968 in network mk-old-k8s-version-254968
	I0229 02:08:03.420212  352939 main.go:141] libmachine: (old-k8s-version-254968) DBG | I0229 02:08:03.420126  353079 retry.go:31] will retry after 394.913015ms: waiting for machine to come up
	I0229 02:08:03.817398  352939 main.go:141] libmachine: (old-k8s-version-254968) DBG | domain old-k8s-version-254968 has defined MAC address 52:54:00:7f:e3:b1 in network mk-old-k8s-version-254968
	I0229 02:08:03.818063  352939 main.go:141] libmachine: (old-k8s-version-254968) DBG | unable to find current IP address of domain old-k8s-version-254968 in network mk-old-k8s-version-254968
	I0229 02:08:03.818110  352939 main.go:141] libmachine: (old-k8s-version-254968) DBG | I0229 02:08:03.817965  353079 retry.go:31] will retry after 574.028462ms: waiting for machine to come up
	I0229 02:08:04.397334  352939 main.go:141] libmachine: (old-k8s-version-254968) DBG | domain old-k8s-version-254968 has defined MAC address 52:54:00:7f:e3:b1 in network mk-old-k8s-version-254968
	I0229 02:08:04.400060  352939 main.go:141] libmachine: (old-k8s-version-254968) DBG | unable to find current IP address of domain old-k8s-version-254968 in network mk-old-k8s-version-254968
	I0229 02:08:04.400120  352939 main.go:141] libmachine: (old-k8s-version-254968) DBG | I0229 02:08:04.400020  353079 retry.go:31] will retry after 545.023112ms: waiting for machine to come up
	I0229 02:08:04.946872  352939 main.go:141] libmachine: (old-k8s-version-254968) DBG | domain old-k8s-version-254968 has defined MAC address 52:54:00:7f:e3:b1 in network mk-old-k8s-version-254968
	I0229 02:08:04.947928  352939 main.go:141] libmachine: (old-k8s-version-254968) DBG | unable to find current IP address of domain old-k8s-version-254968 in network mk-old-k8s-version-254968
	I0229 02:08:04.947954  352939 main.go:141] libmachine: (old-k8s-version-254968) DBG | I0229 02:08:04.947827  353079 retry.go:31] will retry after 612.076685ms: waiting for machine to come up
	I0229 02:08:05.561255  352939 main.go:141] libmachine: (old-k8s-version-254968) DBG | domain old-k8s-version-254968 has defined MAC address 52:54:00:7f:e3:b1 in network mk-old-k8s-version-254968
	I0229 02:08:05.562026  352939 main.go:141] libmachine: (old-k8s-version-254968) DBG | unable to find current IP address of domain old-k8s-version-254968 in network mk-old-k8s-version-254968
	I0229 02:08:05.562058  352939 main.go:141] libmachine: (old-k8s-version-254968) DBG | I0229 02:08:05.561961  353079 retry.go:31] will retry after 1.148430706s: waiting for machine to come up
	I0229 02:08:06.712563  352939 main.go:141] libmachine: (old-k8s-version-254968) DBG | domain old-k8s-version-254968 has defined MAC address 52:54:00:7f:e3:b1 in network mk-old-k8s-version-254968
	I0229 02:08:06.713156  352939 main.go:141] libmachine: (old-k8s-version-254968) DBG | unable to find current IP address of domain old-k8s-version-254968 in network mk-old-k8s-version-254968
	I0229 02:08:06.713177  352939 main.go:141] libmachine: (old-k8s-version-254968) DBG | I0229 02:08:06.713059  353079 retry.go:31] will retry after 1.283252839s: waiting for machine to come up
	I0229 02:08:07.998620  352939 main.go:141] libmachine: (old-k8s-version-254968) DBG | domain old-k8s-version-254968 has defined MAC address 52:54:00:7f:e3:b1 in network mk-old-k8s-version-254968
	I0229 02:08:07.999232  352939 main.go:141] libmachine: (old-k8s-version-254968) DBG | unable to find current IP address of domain old-k8s-version-254968 in network mk-old-k8s-version-254968
	I0229 02:08:07.999289  352939 main.go:141] libmachine: (old-k8s-version-254968) DBG | I0229 02:08:07.999196  353079 retry.go:31] will retry after 1.637282044s: waiting for machine to come up
	I0229 02:08:09.638681  352939 main.go:141] libmachine: (old-k8s-version-254968) DBG | domain old-k8s-version-254968 has defined MAC address 52:54:00:7f:e3:b1 in network mk-old-k8s-version-254968
	I0229 02:08:09.639248  352939 main.go:141] libmachine: (old-k8s-version-254968) DBG | unable to find current IP address of domain old-k8s-version-254968 in network mk-old-k8s-version-254968
	I0229 02:08:09.639277  352939 main.go:141] libmachine: (old-k8s-version-254968) DBG | I0229 02:08:09.639195  353079 retry.go:31] will retry after 1.926029879s: waiting for machine to come up
	I0229 02:08:11.566673  352939 main.go:141] libmachine: (old-k8s-version-254968) DBG | domain old-k8s-version-254968 has defined MAC address 52:54:00:7f:e3:b1 in network mk-old-k8s-version-254968
	I0229 02:08:11.567250  352939 main.go:141] libmachine: (old-k8s-version-254968) DBG | unable to find current IP address of domain old-k8s-version-254968 in network mk-old-k8s-version-254968
	I0229 02:08:11.567299  352939 main.go:141] libmachine: (old-k8s-version-254968) DBG | I0229 02:08:11.567214  353079 retry.go:31] will retry after 2.907843798s: waiting for machine to come up
	I0229 02:08:14.476845  352939 main.go:141] libmachine: (old-k8s-version-254968) DBG | domain old-k8s-version-254968 has defined MAC address 52:54:00:7f:e3:b1 in network mk-old-k8s-version-254968
	I0229 02:08:14.477278  352939 main.go:141] libmachine: (old-k8s-version-254968) DBG | unable to find current IP address of domain old-k8s-version-254968 in network mk-old-k8s-version-254968
	I0229 02:08:14.477346  352939 main.go:141] libmachine: (old-k8s-version-254968) DBG | I0229 02:08:14.477258  353079 retry.go:31] will retry after 2.540994187s: waiting for machine to come up
	I0229 02:08:17.020268  352939 main.go:141] libmachine: (old-k8s-version-254968) DBG | domain old-k8s-version-254968 has defined MAC address 52:54:00:7f:e3:b1 in network mk-old-k8s-version-254968
	I0229 02:08:17.020743  352939 main.go:141] libmachine: (old-k8s-version-254968) DBG | unable to find current IP address of domain old-k8s-version-254968 in network mk-old-k8s-version-254968
	I0229 02:08:17.020780  352939 main.go:141] libmachine: (old-k8s-version-254968) DBG | I0229 02:08:17.020663  353079 retry.go:31] will retry after 3.229385889s: waiting for machine to come up
	I0229 02:08:20.251117  352939 main.go:141] libmachine: (old-k8s-version-254968) DBG | domain old-k8s-version-254968 has defined MAC address 52:54:00:7f:e3:b1 in network mk-old-k8s-version-254968
	I0229 02:08:20.251494  352939 main.go:141] libmachine: (old-k8s-version-254968) DBG | unable to find current IP address of domain old-k8s-version-254968 in network mk-old-k8s-version-254968
	I0229 02:08:20.251522  352939 main.go:141] libmachine: (old-k8s-version-254968) DBG | I0229 02:08:20.251450  353079 retry.go:31] will retry after 3.98495326s: waiting for machine to come up
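The retry lines above poll for the domain's DHCP lease with a growing, jittered delay (202ms, 285ms, 394ms, ... up to roughly 4s) until the lease appears. A sketch that reproduces the shape of that loop, though not minikube's retry package or its precise constants:

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // waitForIP polls lookup with capped, jittered backoff until it
    // returns an address or the timeout elapses.
    func waitForIP(lookup func() (string, error), timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        delay := 200 * time.Millisecond
        for time.Now().Before(deadline) {
            if ip, err := lookup(); err == nil {
                return ip, nil
            }
            jittered := delay + time.Duration(rand.Int63n(int64(delay/2)))
            fmt.Printf("will retry after %v: waiting for machine to come up\n", jittered)
            time.Sleep(jittered)
            if delay < 4*time.Second {
                delay *= 2
            }
        }
        return "", errors.New("timed out waiting for IP")
    }

    func main() {
        calls := 0
        ip, err := waitForIP(func() (string, error) {
            calls++
            if calls < 4 {
                return "", errors.New("no lease yet")
            }
            return "192.168.50.250", nil
        }, time.Minute)
        fmt.Println(ip, err)
    }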
	I0229 02:08:24.239369  352939 main.go:141] libmachine: (old-k8s-version-254968) DBG | domain old-k8s-version-254968 has defined MAC address 52:54:00:7f:e3:b1 in network mk-old-k8s-version-254968
	I0229 02:08:24.239958  352939 main.go:141] libmachine: (old-k8s-version-254968) DBG | domain old-k8s-version-254968 has current primary IP address 192.168.50.250 and MAC address 52:54:00:7f:e3:b1 in network mk-old-k8s-version-254968
	I0229 02:08:24.240002  352939 main.go:141] libmachine: (old-k8s-version-254968) Found IP for machine: 192.168.50.250
	I0229 02:08:24.240029  352939 main.go:141] libmachine: (old-k8s-version-254968) Reserving static IP address...
	I0229 02:08:24.240314  352939 main.go:141] libmachine: (old-k8s-version-254968) DBG | unable to find host DHCP lease matching {name: "old-k8s-version-254968", mac: "52:54:00:7f:e3:b1", ip: "192.168.50.250"} in network mk-old-k8s-version-254968
	I0229 02:08:24.322804  352939 main.go:141] libmachine: (old-k8s-version-254968) DBG | Getting to WaitForSSH function...
	I0229 02:08:24.322843  352939 main.go:141] libmachine: (old-k8s-version-254968) Reserved static IP address: 192.168.50.250
	I0229 02:08:24.322853  352939 main.go:141] libmachine: (old-k8s-version-254968) Waiting for SSH to be available...
	I0229 02:08:24.325870  352939 main.go:141] libmachine: (old-k8s-version-254968) DBG | domain old-k8s-version-254968 has defined MAC address 52:54:00:7f:e3:b1 in network mk-old-k8s-version-254968
	I0229 02:08:24.326326  352939 main.go:141] libmachine: (old-k8s-version-254968) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:7f:e3:b1", ip: ""} in network mk-old-k8s-version-254968
	I0229 02:08:24.326349  352939 main.go:141] libmachine: (old-k8s-version-254968) DBG | unable to find defined IP address of network mk-old-k8s-version-254968 interface with MAC address 52:54:00:7f:e3:b1
	I0229 02:08:24.326591  352939 main.go:141] libmachine: (old-k8s-version-254968) DBG | Using SSH client type: external
	I0229 02:08:24.326616  352939 main.go:141] libmachine: (old-k8s-version-254968) DBG | Using SSH private key: /home/jenkins/minikube-integration/18063-309085/.minikube/machines/old-k8s-version-254968/id_rsa (-rw-------)
	I0229 02:08:24.326650  352939 main.go:141] libmachine: (old-k8s-version-254968) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18063-309085/.minikube/machines/old-k8s-version-254968/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0229 02:08:24.326668  352939 main.go:141] libmachine: (old-k8s-version-254968) DBG | About to run SSH command:
	I0229 02:08:24.326682  352939 main.go:141] libmachine: (old-k8s-version-254968) DBG | exit 0
	I0229 02:08:24.331214  352939 main.go:141] libmachine: (old-k8s-version-254968) DBG | SSH cmd err, output: exit status 255: 
	I0229 02:08:24.331239  352939 main.go:141] libmachine: (old-k8s-version-254968) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0229 02:08:24.331249  352939 main.go:141] libmachine: (old-k8s-version-254968) DBG | command : exit 0
	I0229 02:08:24.331268  352939 main.go:141] libmachine: (old-k8s-version-254968) DBG | err     : exit status 255
	I0229 02:08:24.331280  352939 main.go:141] libmachine: (old-k8s-version-254968) DBG | output  : 
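WaitForSSH shells out to /usr/bin/ssh and runs `exit 0` on the guest; status 255 (as above) means sshd is not ready yet and the probe repeats. A sketch of that probe, with the option list mirroring the log and a 3s retry interval inferred from the timestamps (02:08:24, then 02:08:27):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitForSSH runs `ssh ... exit 0` until it exits zero or attempts run out.
    func waitForSSH(ip, keyPath string, attempts int) error {
        args := []string{
            "-F", "/dev/null",
            "-o", "ConnectionAttempts=3", "-o", "ConnectTimeout=10",
            "-o", "StrictHostKeyChecking=no", "-o", "UserKnownHostsFile=/dev/null",
            "-o", "IdentitiesOnly=yes", "-i", keyPath, "-p", "22",
            "docker@" + ip, "exit 0",
        }
        var err error
        for i := 0; i < attempts; i++ {
            if err = exec.Command("/usr/bin/ssh", args...).Run(); err == nil {
                return nil // "SSH cmd err, output: <nil>"
            }
            fmt.Println("ssh probe failed:", err) // e.g. exit status 255
            time.Sleep(3 * time.Second)
        }
        return err
    }

    func main() {
        _ = waitForSSH("192.168.50.250", "/path/to/id_rsa", 5) // placeholder key path
    }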
	I0229 02:08:27.332331  352939 main.go:141] libmachine: (old-k8s-version-254968) DBG | Getting to WaitForSSH function...
	I0229 02:08:27.334829  352939 main.go:141] libmachine: (old-k8s-version-254968) DBG | domain old-k8s-version-254968 has defined MAC address 52:54:00:7f:e3:b1 in network mk-old-k8s-version-254968
	I0229 02:08:27.335281  352939 main.go:141] libmachine: (old-k8s-version-254968) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:e3:b1", ip: ""} in network mk-old-k8s-version-254968: {Iface:virbr2 ExpiryTime:2024-02-29 03:08:18 +0000 UTC Type:0 Mac:52:54:00:7f:e3:b1 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-254968 Clientid:01:52:54:00:7f:e3:b1}
	I0229 02:08:27.335311  352939 main.go:141] libmachine: (old-k8s-version-254968) DBG | domain old-k8s-version-254968 has defined IP address 192.168.50.250 and MAC address 52:54:00:7f:e3:b1 in network mk-old-k8s-version-254968
	I0229 02:08:27.335488  352939 main.go:141] libmachine: (old-k8s-version-254968) DBG | Using SSH client type: external
	I0229 02:08:27.335511  352939 main.go:141] libmachine: (old-k8s-version-254968) DBG | Using SSH private key: /home/jenkins/minikube-integration/18063-309085/.minikube/machines/old-k8s-version-254968/id_rsa (-rw-------)
	I0229 02:08:27.335545  352939 main.go:141] libmachine: (old-k8s-version-254968) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.250 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18063-309085/.minikube/machines/old-k8s-version-254968/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0229 02:08:27.335563  352939 main.go:141] libmachine: (old-k8s-version-254968) DBG | About to run SSH command:
	I0229 02:08:27.335579  352939 main.go:141] libmachine: (old-k8s-version-254968) DBG | exit 0
	I0229 02:08:27.467072  352939 main.go:141] libmachine: (old-k8s-version-254968) DBG | SSH cmd err, output: <nil>: 
	I0229 02:08:27.467353  352939 main.go:141] libmachine: (old-k8s-version-254968) KVM machine creation complete!
	I0229 02:08:27.467727  352939 main.go:141] libmachine: (old-k8s-version-254968) Calling .GetConfigRaw
	I0229 02:08:27.468450  352939 main.go:141] libmachine: (old-k8s-version-254968) Calling .DriverName
	I0229 02:08:27.468734  352939 main.go:141] libmachine: (old-k8s-version-254968) Calling .DriverName
	I0229 02:08:27.468945  352939 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0229 02:08:27.468961  352939 main.go:141] libmachine: (old-k8s-version-254968) Calling .GetState
	I0229 02:08:27.470445  352939 main.go:141] libmachine: Detecting operating system of created instance...
	I0229 02:08:27.470461  352939 main.go:141] libmachine: Waiting for SSH to be available...
	I0229 02:08:27.470466  352939 main.go:141] libmachine: Getting to WaitForSSH function...
	I0229 02:08:27.470472  352939 main.go:141] libmachine: (old-k8s-version-254968) Calling .GetSSHHostname
	I0229 02:08:27.473121  352939 main.go:141] libmachine: (old-k8s-version-254968) DBG | domain old-k8s-version-254968 has defined MAC address 52:54:00:7f:e3:b1 in network mk-old-k8s-version-254968
	I0229 02:08:27.473551  352939 main.go:141] libmachine: (old-k8s-version-254968) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:e3:b1", ip: ""} in network mk-old-k8s-version-254968: {Iface:virbr2 ExpiryTime:2024-02-29 03:08:18 +0000 UTC Type:0 Mac:52:54:00:7f:e3:b1 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-254968 Clientid:01:52:54:00:7f:e3:b1}
	I0229 02:08:27.473593  352939 main.go:141] libmachine: (old-k8s-version-254968) DBG | domain old-k8s-version-254968 has defined IP address 192.168.50.250 and MAC address 52:54:00:7f:e3:b1 in network mk-old-k8s-version-254968
	I0229 02:08:27.473758  352939 main.go:141] libmachine: (old-k8s-version-254968) Calling .GetSSHPort
	I0229 02:08:27.474003  352939 main.go:141] libmachine: (old-k8s-version-254968) Calling .GetSSHKeyPath
	I0229 02:08:27.474212  352939 main.go:141] libmachine: (old-k8s-version-254968) Calling .GetSSHKeyPath
	I0229 02:08:27.474410  352939 main.go:141] libmachine: (old-k8s-version-254968) Calling .GetSSHUsername
	I0229 02:08:27.474603  352939 main.go:141] libmachine: Using SSH client type: native
	I0229 02:08:27.474862  352939 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.50.250 22 <nil> <nil>}
	I0229 02:08:27.474881  352939 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0229 02:08:27.602050  352939 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0229 02:08:27.602074  352939 main.go:141] libmachine: Detecting the provisioner...
	I0229 02:08:27.602107  352939 main.go:141] libmachine: (old-k8s-version-254968) Calling .GetSSHHostname
	I0229 02:08:27.607626  352939 main.go:141] libmachine: (old-k8s-version-254968) DBG | domain old-k8s-version-254968 has defined MAC address 52:54:00:7f:e3:b1 in network mk-old-k8s-version-254968
	I0229 02:08:27.608041  352939 main.go:141] libmachine: (old-k8s-version-254968) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:e3:b1", ip: ""} in network mk-old-k8s-version-254968: {Iface:virbr2 ExpiryTime:2024-02-29 03:08:18 +0000 UTC Type:0 Mac:52:54:00:7f:e3:b1 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-254968 Clientid:01:52:54:00:7f:e3:b1}
	I0229 02:08:27.608100  352939 main.go:141] libmachine: (old-k8s-version-254968) DBG | domain old-k8s-version-254968 has defined IP address 192.168.50.250 and MAC address 52:54:00:7f:e3:b1 in network mk-old-k8s-version-254968
	I0229 02:08:27.608226  352939 main.go:141] libmachine: (old-k8s-version-254968) Calling .GetSSHPort
	I0229 02:08:27.608397  352939 main.go:141] libmachine: (old-k8s-version-254968) Calling .GetSSHKeyPath
	I0229 02:08:27.608568  352939 main.go:141] libmachine: (old-k8s-version-254968) Calling .GetSSHKeyPath
	I0229 02:08:27.608720  352939 main.go:141] libmachine: (old-k8s-version-254968) Calling .GetSSHUsername
	I0229 02:08:27.608926  352939 main.go:141] libmachine: Using SSH client type: native
	I0229 02:08:27.609097  352939 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.50.250 22 <nil> <nil>}
	I0229 02:08:27.609107  352939 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0229 02:08:27.732198  352939 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0229 02:08:27.732311  352939 main.go:141] libmachine: found compatible host: buildroot
	I0229 02:08:27.732325  352939 main.go:141] libmachine: Provisioning with buildroot...
	I0229 02:08:27.732336  352939 main.go:141] libmachine: (old-k8s-version-254968) Calling .GetMachineName
	I0229 02:08:27.732607  352939 buildroot.go:166] provisioning hostname "old-k8s-version-254968"
	I0229 02:08:27.732651  352939 main.go:141] libmachine: (old-k8s-version-254968) Calling .GetMachineName
	I0229 02:08:27.732856  352939 main.go:141] libmachine: (old-k8s-version-254968) Calling .GetSSHHostname
	I0229 02:08:27.736099  352939 main.go:141] libmachine: (old-k8s-version-254968) DBG | domain old-k8s-version-254968 has defined MAC address 52:54:00:7f:e3:b1 in network mk-old-k8s-version-254968
	I0229 02:08:27.736494  352939 main.go:141] libmachine: (old-k8s-version-254968) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:e3:b1", ip: ""} in network mk-old-k8s-version-254968: {Iface:virbr2 ExpiryTime:2024-02-29 03:08:18 +0000 UTC Type:0 Mac:52:54:00:7f:e3:b1 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-254968 Clientid:01:52:54:00:7f:e3:b1}
	I0229 02:08:27.736526  352939 main.go:141] libmachine: (old-k8s-version-254968) DBG | domain old-k8s-version-254968 has defined IP address 192.168.50.250 and MAC address 52:54:00:7f:e3:b1 in network mk-old-k8s-version-254968
	I0229 02:08:27.736677  352939 main.go:141] libmachine: (old-k8s-version-254968) Calling .GetSSHPort
	I0229 02:08:27.736885  352939 main.go:141] libmachine: (old-k8s-version-254968) Calling .GetSSHKeyPath
	I0229 02:08:27.737087  352939 main.go:141] libmachine: (old-k8s-version-254968) Calling .GetSSHKeyPath
	I0229 02:08:27.737261  352939 main.go:141] libmachine: (old-k8s-version-254968) Calling .GetSSHUsername
	I0229 02:08:27.737428  352939 main.go:141] libmachine: Using SSH client type: native
	I0229 02:08:27.737668  352939 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.50.250 22 <nil> <nil>}
	I0229 02:08:27.737686  352939 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-254968 && echo "old-k8s-version-254968" | sudo tee /etc/hostname
	I0229 02:08:27.874874  352939 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-254968
	
	I0229 02:08:27.874909  352939 main.go:141] libmachine: (old-k8s-version-254968) Calling .GetSSHHostname
	I0229 02:08:27.878285  352939 main.go:141] libmachine: (old-k8s-version-254968) DBG | domain old-k8s-version-254968 has defined MAC address 52:54:00:7f:e3:b1 in network mk-old-k8s-version-254968
	I0229 02:08:27.878686  352939 main.go:141] libmachine: (old-k8s-version-254968) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:e3:b1", ip: ""} in network mk-old-k8s-version-254968: {Iface:virbr2 ExpiryTime:2024-02-29 03:08:18 +0000 UTC Type:0 Mac:52:54:00:7f:e3:b1 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-254968 Clientid:01:52:54:00:7f:e3:b1}
	I0229 02:08:27.878716  352939 main.go:141] libmachine: (old-k8s-version-254968) DBG | domain old-k8s-version-254968 has defined IP address 192.168.50.250 and MAC address 52:54:00:7f:e3:b1 in network mk-old-k8s-version-254968
	I0229 02:08:27.878908  352939 main.go:141] libmachine: (old-k8s-version-254968) Calling .GetSSHPort
	I0229 02:08:27.879134  352939 main.go:141] libmachine: (old-k8s-version-254968) Calling .GetSSHKeyPath
	I0229 02:08:27.879340  352939 main.go:141] libmachine: (old-k8s-version-254968) Calling .GetSSHKeyPath
	I0229 02:08:27.879473  352939 main.go:141] libmachine: (old-k8s-version-254968) Calling .GetSSHUsername
	I0229 02:08:27.879683  352939 main.go:141] libmachine: Using SSH client type: native
	I0229 02:08:27.879913  352939 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.50.250 22 <nil> <nil>}
	I0229 02:08:27.879938  352939 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-254968' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-254968/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-254968' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0229 02:08:28.007337  352939 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0229 02:08:28.007368  352939 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18063-309085/.minikube CaCertPath:/home/jenkins/minikube-integration/18063-309085/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18063-309085/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18063-309085/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18063-309085/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18063-309085/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18063-309085/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18063-309085/.minikube}
	I0229 02:08:28.007398  352939 buildroot.go:174] setting up certificates
	I0229 02:08:28.007413  352939 provision.go:83] configureAuth start
	I0229 02:08:28.007428  352939 main.go:141] libmachine: (old-k8s-version-254968) Calling .GetMachineName
	I0229 02:08:28.007708  352939 main.go:141] libmachine: (old-k8s-version-254968) Calling .GetIP
	I0229 02:08:28.010742  352939 main.go:141] libmachine: (old-k8s-version-254968) DBG | domain old-k8s-version-254968 has defined MAC address 52:54:00:7f:e3:b1 in network mk-old-k8s-version-254968
	I0229 02:08:28.011181  352939 main.go:141] libmachine: (old-k8s-version-254968) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:e3:b1", ip: ""} in network mk-old-k8s-version-254968: {Iface:virbr2 ExpiryTime:2024-02-29 03:08:18 +0000 UTC Type:0 Mac:52:54:00:7f:e3:b1 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-254968 Clientid:01:52:54:00:7f:e3:b1}
	I0229 02:08:28.011211  352939 main.go:141] libmachine: (old-k8s-version-254968) DBG | domain old-k8s-version-254968 has defined IP address 192.168.50.250 and MAC address 52:54:00:7f:e3:b1 in network mk-old-k8s-version-254968
	I0229 02:08:28.011349  352939 main.go:141] libmachine: (old-k8s-version-254968) Calling .GetSSHHostname
	I0229 02:08:28.013566  352939 main.go:141] libmachine: (old-k8s-version-254968) DBG | domain old-k8s-version-254968 has defined MAC address 52:54:00:7f:e3:b1 in network mk-old-k8s-version-254968
	I0229 02:08:28.013941  352939 main.go:141] libmachine: (old-k8s-version-254968) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:e3:b1", ip: ""} in network mk-old-k8s-version-254968: {Iface:virbr2 ExpiryTime:2024-02-29 03:08:18 +0000 UTC Type:0 Mac:52:54:00:7f:e3:b1 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-254968 Clientid:01:52:54:00:7f:e3:b1}
	I0229 02:08:28.013969  352939 main.go:141] libmachine: (old-k8s-version-254968) DBG | domain old-k8s-version-254968 has defined IP address 192.168.50.250 and MAC address 52:54:00:7f:e3:b1 in network mk-old-k8s-version-254968
	I0229 02:08:28.014172  352939 provision.go:138] copyHostCerts
	I0229 02:08:28.014235  352939 exec_runner.go:144] found /home/jenkins/minikube-integration/18063-309085/.minikube/ca.pem, removing ...
	I0229 02:08:28.014255  352939 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18063-309085/.minikube/ca.pem
	I0229 02:08:28.014330  352939 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18063-309085/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18063-309085/.minikube/ca.pem (1082 bytes)
	I0229 02:08:28.014438  352939 exec_runner.go:144] found /home/jenkins/minikube-integration/18063-309085/.minikube/cert.pem, removing ...
	I0229 02:08:28.014450  352939 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18063-309085/.minikube/cert.pem
	I0229 02:08:28.014481  352939 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18063-309085/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18063-309085/.minikube/cert.pem (1123 bytes)
	I0229 02:08:28.014553  352939 exec_runner.go:144] found /home/jenkins/minikube-integration/18063-309085/.minikube/key.pem, removing ...
	I0229 02:08:28.014565  352939 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18063-309085/.minikube/key.pem
	I0229 02:08:28.014593  352939 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18063-309085/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18063-309085/.minikube/key.pem (1675 bytes)
	I0229 02:08:28.014691  352939 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18063-309085/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18063-309085/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18063-309085/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-254968 san=[192.168.50.250 192.168.50.250 localhost 127.0.0.1 minikube old-k8s-version-254968]
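The server cert generated above carries the machine IP, localhost, and hostname SANs listed in the log, with the 26280h expiry from the profile config. A compressed stand-in using crypto/x509; it self-signs for brevity, whereas minikube signs the server cert with its CA key:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-254968"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     []string{"localhost", "minikube", "old-k8s-version-254968"},
            IPAddresses:  []net.IP{net.ParseIP("192.168.50.250"), net.ParseIP("127.0.0.1")},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        _ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }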
	I0229 02:08:28.157788  352939 provision.go:172] copyRemoteCerts
	I0229 02:08:28.157846  352939 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0229 02:08:28.157878  352939 main.go:141] libmachine: (old-k8s-version-254968) Calling .GetSSHHostname
	I0229 02:08:28.160627  352939 main.go:141] libmachine: (old-k8s-version-254968) DBG | domain old-k8s-version-254968 has defined MAC address 52:54:00:7f:e3:b1 in network mk-old-k8s-version-254968
	I0229 02:08:28.161086  352939 main.go:141] libmachine: (old-k8s-version-254968) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:e3:b1", ip: ""} in network mk-old-k8s-version-254968: {Iface:virbr2 ExpiryTime:2024-02-29 03:08:18 +0000 UTC Type:0 Mac:52:54:00:7f:e3:b1 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-254968 Clientid:01:52:54:00:7f:e3:b1}
	I0229 02:08:28.161128  352939 main.go:141] libmachine: (old-k8s-version-254968) DBG | domain old-k8s-version-254968 has defined IP address 192.168.50.250 and MAC address 52:54:00:7f:e3:b1 in network mk-old-k8s-version-254968
	I0229 02:08:28.161392  352939 main.go:141] libmachine: (old-k8s-version-254968) Calling .GetSSHPort
	I0229 02:08:28.161612  352939 main.go:141] libmachine: (old-k8s-version-254968) Calling .GetSSHKeyPath
	I0229 02:08:28.161821  352939 main.go:141] libmachine: (old-k8s-version-254968) Calling .GetSSHUsername
	I0229 02:08:28.162010  352939 sshutil.go:53] new ssh client: &{IP:192.168.50.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-309085/.minikube/machines/old-k8s-version-254968/id_rsa Username:docker}
	I0229 02:08:28.256044  352939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-309085/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0229 02:08:28.286939  352939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-309085/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0229 02:08:28.319747  352939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-309085/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0229 02:08:28.356540  352939 provision.go:86] duration metric: configureAuth took 349.112671ms
	I0229 02:08:28.356579  352939 buildroot.go:189] setting minikube options for container-runtime
	I0229 02:08:28.356767  352939 config.go:182] Loaded profile config "old-k8s-version-254968": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.16.0
	I0229 02:08:28.356791  352939 main.go:141] libmachine: Checking connection to Docker...
	I0229 02:08:28.356802  352939 main.go:141] libmachine: (old-k8s-version-254968) Calling .GetURL
	I0229 02:08:28.358307  352939 main.go:141] libmachine: (old-k8s-version-254968) DBG | Using libvirt version 6000000
	I0229 02:08:28.360892  352939 main.go:141] libmachine: (old-k8s-version-254968) DBG | domain old-k8s-version-254968 has defined MAC address 52:54:00:7f:e3:b1 in network mk-old-k8s-version-254968
	I0229 02:08:28.361382  352939 main.go:141] libmachine: (old-k8s-version-254968) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:e3:b1", ip: ""} in network mk-old-k8s-version-254968: {Iface:virbr2 ExpiryTime:2024-02-29 03:08:18 +0000 UTC Type:0 Mac:52:54:00:7f:e3:b1 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-254968 Clientid:01:52:54:00:7f:e3:b1}
	I0229 02:08:28.361407  352939 main.go:141] libmachine: (old-k8s-version-254968) DBG | domain old-k8s-version-254968 has defined IP address 192.168.50.250 and MAC address 52:54:00:7f:e3:b1 in network mk-old-k8s-version-254968
	I0229 02:08:28.361662  352939 main.go:141] libmachine: Docker is up and running!
	I0229 02:08:28.361688  352939 main.go:141] libmachine: Reticulating splines...
	I0229 02:08:28.361706  352939 client.go:171] LocalClient.Create took 27.329208118s
	I0229 02:08:28.361733  352939 start.go:167] duration metric: libmachine.API.Create for "old-k8s-version-254968" took 27.329288252s
	I0229 02:08:28.361751  352939 start.go:300] post-start starting for "old-k8s-version-254968" (driver="kvm2")
	I0229 02:08:28.361764  352939 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0229 02:08:28.361783  352939 main.go:141] libmachine: (old-k8s-version-254968) Calling .DriverName
	I0229 02:08:28.362092  352939 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0229 02:08:28.362127  352939 main.go:141] libmachine: (old-k8s-version-254968) Calling .GetSSHHostname
	I0229 02:08:28.364808  352939 main.go:141] libmachine: (old-k8s-version-254968) DBG | domain old-k8s-version-254968 has defined MAC address 52:54:00:7f:e3:b1 in network mk-old-k8s-version-254968
	I0229 02:08:28.365237  352939 main.go:141] libmachine: (old-k8s-version-254968) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:e3:b1", ip: ""} in network mk-old-k8s-version-254968: {Iface:virbr2 ExpiryTime:2024-02-29 03:08:18 +0000 UTC Type:0 Mac:52:54:00:7f:e3:b1 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-254968 Clientid:01:52:54:00:7f:e3:b1}
	I0229 02:08:28.365267  352939 main.go:141] libmachine: (old-k8s-version-254968) DBG | domain old-k8s-version-254968 has defined IP address 192.168.50.250 and MAC address 52:54:00:7f:e3:b1 in network mk-old-k8s-version-254968
	I0229 02:08:28.365464  352939 main.go:141] libmachine: (old-k8s-version-254968) Calling .GetSSHPort
	I0229 02:08:28.365682  352939 main.go:141] libmachine: (old-k8s-version-254968) Calling .GetSSHKeyPath
	I0229 02:08:28.365892  352939 main.go:141] libmachine: (old-k8s-version-254968) Calling .GetSSHUsername
	I0229 02:08:28.366118  352939 sshutil.go:53] new ssh client: &{IP:192.168.50.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-309085/.minikube/machines/old-k8s-version-254968/id_rsa Username:docker}
	I0229 02:08:28.456318  352939 ssh_runner.go:195] Run: cat /etc/os-release
	I0229 02:08:28.461254  352939 info.go:137] Remote host: Buildroot 2023.02.9
	I0229 02:08:28.461299  352939 filesync.go:126] Scanning /home/jenkins/minikube-integration/18063-309085/.minikube/addons for local assets ...
	I0229 02:08:28.461370  352939 filesync.go:126] Scanning /home/jenkins/minikube-integration/18063-309085/.minikube/files for local assets ...
	I0229 02:08:28.461442  352939 filesync.go:149] local asset: /home/jenkins/minikube-integration/18063-309085/.minikube/files/etc/ssl/certs/3163362.pem -> 3163362.pem in /etc/ssl/certs
	I0229 02:08:28.461530  352939 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0229 02:08:28.473698  352939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-309085/.minikube/files/etc/ssl/certs/3163362.pem --> /etc/ssl/certs/3163362.pem (1708 bytes)
	I0229 02:08:28.502529  352939 start.go:303] post-start completed in 140.758524ms
	I0229 02:08:28.502589  352939 main.go:141] libmachine: (old-k8s-version-254968) Calling .GetConfigRaw
	I0229 02:08:28.503333  352939 main.go:141] libmachine: (old-k8s-version-254968) Calling .GetIP
	I0229 02:08:28.506166  352939 main.go:141] libmachine: (old-k8s-version-254968) DBG | domain old-k8s-version-254968 has defined MAC address 52:54:00:7f:e3:b1 in network mk-old-k8s-version-254968
	I0229 02:08:28.506544  352939 main.go:141] libmachine: (old-k8s-version-254968) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:e3:b1", ip: ""} in network mk-old-k8s-version-254968: {Iface:virbr2 ExpiryTime:2024-02-29 03:08:18 +0000 UTC Type:0 Mac:52:54:00:7f:e3:b1 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-254968 Clientid:01:52:54:00:7f:e3:b1}
	I0229 02:08:28.506572  352939 main.go:141] libmachine: (old-k8s-version-254968) DBG | domain old-k8s-version-254968 has defined IP address 192.168.50.250 and MAC address 52:54:00:7f:e3:b1 in network mk-old-k8s-version-254968
	I0229 02:08:28.506819  352939 profile.go:148] Saving config to /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/old-k8s-version-254968/config.json ...
	I0229 02:08:28.506989  352939 start.go:128] duration metric: createHost completed in 27.499133374s
	I0229 02:08:28.507011  352939 main.go:141] libmachine: (old-k8s-version-254968) Calling .GetSSHHostname
	I0229 02:08:28.509569  352939 main.go:141] libmachine: (old-k8s-version-254968) DBG | domain old-k8s-version-254968 has defined MAC address 52:54:00:7f:e3:b1 in network mk-old-k8s-version-254968
	I0229 02:08:28.509956  352939 main.go:141] libmachine: (old-k8s-version-254968) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:e3:b1", ip: ""} in network mk-old-k8s-version-254968: {Iface:virbr2 ExpiryTime:2024-02-29 03:08:18 +0000 UTC Type:0 Mac:52:54:00:7f:e3:b1 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-254968 Clientid:01:52:54:00:7f:e3:b1}
	I0229 02:08:28.509991  352939 main.go:141] libmachine: (old-k8s-version-254968) DBG | domain old-k8s-version-254968 has defined IP address 192.168.50.250 and MAC address 52:54:00:7f:e3:b1 in network mk-old-k8s-version-254968
	I0229 02:08:28.510112  352939 main.go:141] libmachine: (old-k8s-version-254968) Calling .GetSSHPort
	I0229 02:08:28.510311  352939 main.go:141] libmachine: (old-k8s-version-254968) Calling .GetSSHKeyPath
	I0229 02:08:28.510487  352939 main.go:141] libmachine: (old-k8s-version-254968) Calling .GetSSHKeyPath
	I0229 02:08:28.510632  352939 main.go:141] libmachine: (old-k8s-version-254968) Calling .GetSSHUsername
	I0229 02:08:28.510808  352939 main.go:141] libmachine: Using SSH client type: native
	I0229 02:08:28.511048  352939 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.50.250 22 <nil> <nil>}
	I0229 02:08:28.511067  352939 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0229 02:08:28.636269  352939 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709172508.617562583
	
	I0229 02:08:28.636295  352939 fix.go:206] guest clock: 1709172508.617562583
	I0229 02:08:28.636311  352939 fix.go:219] Guest: 2024-02-29 02:08:28.617562583 +0000 UTC Remote: 2024-02-29 02:08:28.507000841 +0000 UTC m=+46.863326447 (delta=110.561742ms)
	I0229 02:08:28.636337  352939 fix.go:190] guest clock delta is within tolerance: 110.561742ms
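The guest-clock check above runs `date +%s.%N` over SSH and compares it with the host's view of the time, accepting the 110.561742ms delta as within tolerance. The comparison reduces to an absolute-difference check; the 2s threshold below is an assumption, not minikube's actual constant:

    package main

    import (
        "fmt"
        "time"
    )

    // clockDeltaOK reports the absolute guest/host clock delta and whether
    // it falls inside the given tolerance.
    func clockDeltaOK(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
        delta := guest.Sub(host)
        if delta < 0 {
            delta = -delta
        }
        return delta, delta <= tolerance
    }

    func main() {
        host := time.Date(2024, time.February, 29, 2, 8, 28, 507000841, time.UTC)  // Remote time from the log
        guest := time.Date(2024, time.February, 29, 2, 8, 28, 617562583, time.UTC) // Guest time from the log
        delta, ok := clockDeltaOK(guest, host, 2*time.Second) // threshold assumed
        fmt.Printf("delta=%v within tolerance: %v\n", delta, ok) // delta=110.561742ms within tolerance: true
    }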
	I0229 02:08:28.636345  352939 start.go:83] releasing machines lock for "old-k8s-version-254968", held for 27.628683757s
	I0229 02:08:28.636370  352939 main.go:141] libmachine: (old-k8s-version-254968) Calling .DriverName
	I0229 02:08:28.636708  352939 main.go:141] libmachine: (old-k8s-version-254968) Calling .GetIP
	I0229 02:08:28.639823  352939 main.go:141] libmachine: (old-k8s-version-254968) DBG | domain old-k8s-version-254968 has defined MAC address 52:54:00:7f:e3:b1 in network mk-old-k8s-version-254968
	I0229 02:08:28.640207  352939 main.go:141] libmachine: (old-k8s-version-254968) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:e3:b1", ip: ""} in network mk-old-k8s-version-254968: {Iface:virbr2 ExpiryTime:2024-02-29 03:08:18 +0000 UTC Type:0 Mac:52:54:00:7f:e3:b1 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-254968 Clientid:01:52:54:00:7f:e3:b1}
	I0229 02:08:28.640232  352939 main.go:141] libmachine: (old-k8s-version-254968) DBG | domain old-k8s-version-254968 has defined IP address 192.168.50.250 and MAC address 52:54:00:7f:e3:b1 in network mk-old-k8s-version-254968
	I0229 02:08:28.640450  352939 main.go:141] libmachine: (old-k8s-version-254968) Calling .DriverName
	I0229 02:08:28.640924  352939 main.go:141] libmachine: (old-k8s-version-254968) Calling .DriverName
	I0229 02:08:28.641143  352939 main.go:141] libmachine: (old-k8s-version-254968) Calling .DriverName
	I0229 02:08:28.641252  352939 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0229 02:08:28.641316  352939 main.go:141] libmachine: (old-k8s-version-254968) Calling .GetSSHHostname
	I0229 02:08:28.641557  352939 ssh_runner.go:195] Run: cat /version.json
	I0229 02:08:28.641586  352939 main.go:141] libmachine: (old-k8s-version-254968) Calling .GetSSHHostname
	I0229 02:08:28.644412  352939 main.go:141] libmachine: (old-k8s-version-254968) DBG | domain old-k8s-version-254968 has defined MAC address 52:54:00:7f:e3:b1 in network mk-old-k8s-version-254968
	I0229 02:08:28.644609  352939 main.go:141] libmachine: (old-k8s-version-254968) DBG | domain old-k8s-version-254968 has defined MAC address 52:54:00:7f:e3:b1 in network mk-old-k8s-version-254968
	I0229 02:08:28.644844  352939 main.go:141] libmachine: (old-k8s-version-254968) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:e3:b1", ip: ""} in network mk-old-k8s-version-254968: {Iface:virbr2 ExpiryTime:2024-02-29 03:08:18 +0000 UTC Type:0 Mac:52:54:00:7f:e3:b1 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-254968 Clientid:01:52:54:00:7f:e3:b1}
	I0229 02:08:28.644888  352939 main.go:141] libmachine: (old-k8s-version-254968) DBG | domain old-k8s-version-254968 has defined IP address 192.168.50.250 and MAC address 52:54:00:7f:e3:b1 in network mk-old-k8s-version-254968
	I0229 02:08:28.644918  352939 main.go:141] libmachine: (old-k8s-version-254968) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:e3:b1", ip: ""} in network mk-old-k8s-version-254968: {Iface:virbr2 ExpiryTime:2024-02-29 03:08:18 +0000 UTC Type:0 Mac:52:54:00:7f:e3:b1 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-254968 Clientid:01:52:54:00:7f:e3:b1}
	I0229 02:08:28.644983  352939 main.go:141] libmachine: (old-k8s-version-254968) DBG | domain old-k8s-version-254968 has defined IP address 192.168.50.250 and MAC address 52:54:00:7f:e3:b1 in network mk-old-k8s-version-254968
	I0229 02:08:28.645057  352939 main.go:141] libmachine: (old-k8s-version-254968) Calling .GetSSHPort
	I0229 02:08:28.645181  352939 main.go:141] libmachine: (old-k8s-version-254968) Calling .GetSSHPort
	I0229 02:08:28.645344  352939 main.go:141] libmachine: (old-k8s-version-254968) Calling .GetSSHKeyPath
	I0229 02:08:28.645352  352939 main.go:141] libmachine: (old-k8s-version-254968) Calling .GetSSHKeyPath
	I0229 02:08:28.645531  352939 main.go:141] libmachine: (old-k8s-version-254968) Calling .GetSSHUsername
	I0229 02:08:28.645545  352939 main.go:141] libmachine: (old-k8s-version-254968) Calling .GetSSHUsername
	I0229 02:08:28.645691  352939 sshutil.go:53] new ssh client: &{IP:192.168.50.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-309085/.minikube/machines/old-k8s-version-254968/id_rsa Username:docker}
	I0229 02:08:28.645958  352939 sshutil.go:53] new ssh client: &{IP:192.168.50.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-309085/.minikube/machines/old-k8s-version-254968/id_rsa Username:docker}
	I0229 02:08:28.761941  352939 ssh_runner.go:195] Run: systemctl --version
	I0229 02:08:28.770303  352939 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0229 02:08:28.778049  352939 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0229 02:08:28.778133  352939 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0229 02:08:28.797976  352939 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0229 02:08:28.798006  352939 start.go:475] detecting cgroup driver to use...
	I0229 02:08:28.798113  352939 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0229 02:08:28.840738  352939 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0229 02:08:28.860550  352939 docker.go:217] disabling cri-docker service (if available) ...
	I0229 02:08:28.860614  352939 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0229 02:08:28.879352  352939 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0229 02:08:28.897104  352939 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0229 02:08:29.079427  352939 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0229 02:08:29.264782  352939 docker.go:233] disabling docker service ...
	I0229 02:08:29.264868  352939 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0229 02:08:29.285965  352939 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0229 02:08:29.308992  352939 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0229 02:08:29.456661  352939 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0229 02:08:29.617442  352939 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0229 02:08:29.638250  352939 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0229 02:08:29.660651  352939 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.1"|' /etc/containerd/config.toml"
	I0229 02:08:29.673108  352939 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0229 02:08:29.686330  352939 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0229 02:08:29.686402  352939 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0229 02:08:29.698226  352939 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0229 02:08:29.714564  352939 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0229 02:08:29.731774  352939 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0229 02:08:29.749123  352939 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0229 02:08:29.764335  352939 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
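Taken together, the sed edits above pin the pause image, relax oom-score adjustment, force the cgroupfs cgroup driver, standardize on the runc.v2 shim, and point the CNI conf_dir at /etc/cni/net.d. A quick way to confirm the outcome (a sketch; where these keys sit inside config.toml varies by containerd version):

    sudo grep -E 'sandbox_image|restrict_oom_score_adj|SystemdCgroup|conf_dir' /etc/containerd/config.toml
    # expected, per the edits above:
    #   sandbox_image = "registry.k8s.io/pause:3.1"
    #   restrict_oom_score_adj = false
    #   SystemdCgroup = false
    #   conf_dir = "/etc/cni/net.d"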
	I0229 02:08:29.782247  352939 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0229 02:08:29.798815  352939 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0229 02:08:29.798882  352939 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0229 02:08:29.822156  352939 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
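The sysctl probe fails only because the br_netfilter module is not loaded yet; once the modprobe above succeeds, the key exists. Checking by hand (a sketch):

    sudo modprobe br_netfilter
    sysctl net.bridge.bridge-nf-call-iptables    # now resolvable; typically "... = 1"
    cat /proc/sys/net/ipv4/ip_forward            # should print 1 after the echo above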
	I0229 02:08:29.834551  352939 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 02:08:29.969444  352939 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0229 02:08:30.008303  352939 start.go:522] Will wait 60s for socket path /run/containerd/containerd.sock
	I0229 02:08:30.008380  352939 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0229 02:08:30.015955  352939 retry.go:31] will retry after 1.106089673s: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
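The retry is simply polling for containerd's socket to reappear after the restart. An equivalent hand-rolled wait, matching the 60s budget noted in the log (a sketch):

    # wait up to 60s for the containerd socket
    for _ in $(seq 1 60); do
      [ -S /run/containerd/containerd.sock ] && break
      sleep 1
    done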
	I0229 02:08:31.122186  352939 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0229 02:08:31.129256  352939 start.go:543] Will wait 60s for crictl version
	I0229 02:08:31.129335  352939 ssh_runner.go:195] Run: which crictl
	I0229 02:08:31.134618  352939 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0229 02:08:31.178169  352939 start.go:559] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.7.11
	RuntimeApiVersion:  v1
	I0229 02:08:31.178230  352939 ssh_runner.go:195] Run: containerd --version
	I0229 02:08:31.207758  352939 ssh_runner.go:195] Run: containerd --version
	I0229 02:08:31.239999  352939 out.go:177] * Preparing Kubernetes v1.16.0 on containerd 1.7.11 ...
	I0229 02:08:31.241409  352939 main.go:141] libmachine: (old-k8s-version-254968) Calling .GetIP
	I0229 02:08:31.244678  352939 main.go:141] libmachine: (old-k8s-version-254968) DBG | domain old-k8s-version-254968 has defined MAC address 52:54:00:7f:e3:b1 in network mk-old-k8s-version-254968
	I0229 02:08:31.245143  352939 main.go:141] libmachine: (old-k8s-version-254968) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:e3:b1", ip: ""} in network mk-old-k8s-version-254968: {Iface:virbr2 ExpiryTime:2024-02-29 03:08:18 +0000 UTC Type:0 Mac:52:54:00:7f:e3:b1 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-254968 Clientid:01:52:54:00:7f:e3:b1}
	I0229 02:08:31.245172  352939 main.go:141] libmachine: (old-k8s-version-254968) DBG | domain old-k8s-version-254968 has defined IP address 192.168.50.250 and MAC address 52:54:00:7f:e3:b1 in network mk-old-k8s-version-254968
	I0229 02:08:31.245438  352939 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0229 02:08:31.251007  352939 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0229 02:08:31.268390  352939 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I0229 02:08:31.268477  352939 ssh_runner.go:195] Run: sudo crictl images --output json
	I0229 02:08:31.307900  352939 containerd.go:608] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0229 02:08:31.307977  352939 ssh_runner.go:195] Run: which lz4
	I0229 02:08:31.313067  352939 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0229 02:08:31.318785  352939 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0229 02:08:31.318824  352939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-309085/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (440628646 bytes)
	I0229 02:08:33.476613  352939 containerd.go:548] Took 2.163567 seconds to copy over tarball
	I0229 02:08:33.476713  352939 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0229 02:08:36.830627  352939 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.353867564s)
	I0229 02:08:36.830664  352939 containerd.go:555] Took 3.354012 seconds to extract the tarball
	I0229 02:08:36.830693  352939 ssh_runner.go:146] rm: /preloaded.tar.lz4
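The preload step amounts to copying one tarball to the VM and unpacking it over /var, after which the removal reclaims the ~440 MB. Condensed into the two on-VM commands from the log:

    sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
    sudo rm /preloaded.tar.lz4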
	I0229 02:08:36.874068  352939 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 02:08:37.019285  352939 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0229 02:08:37.049557  352939 ssh_runner.go:195] Run: sudo crictl images --output json
	I0229 02:08:37.097479  352939 retry.go:31] will retry after 320.735607ms: sudo crictl images --output json: Process exited with status 1
	stdout:
	
	stderr:
	time="2024-02-29T02:08:37Z" level=fatal msg="validate service connection: validate CRI v1 image API for endpoint \"unix:///run/containerd/containerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
	I0229 02:08:37.419097  352939 ssh_runner.go:195] Run: sudo crictl images --output json
	I0229 02:08:37.479061  352939 containerd.go:608] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0229 02:08:37.479091  352939 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0229 02:08:37.479155  352939 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 02:08:37.479195  352939 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I0229 02:08:37.479208  352939 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0229 02:08:37.479233  352939 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I0229 02:08:37.479248  352939 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I0229 02:08:37.479274  352939 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0229 02:08:37.479214  352939 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I0229 02:08:37.479471  352939 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I0229 02:08:37.480742  352939 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I0229 02:08:37.480743  352939 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 02:08:37.480858  352939 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I0229 02:08:37.481254  352939 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I0229 02:08:37.481252  352939 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0229 02:08:37.481259  352939 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I0229 02:08:37.481947  352939 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I0229 02:08:37.482353  352939 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I0229 02:08:37.623927  352939 containerd.go:252] Checking existence of image with name "registry.k8s.io/kube-scheduler:v1.16.0" and sha "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a"
	I0229 02:08:37.623993  352939 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images check
	I0229 02:08:37.626008  352939 containerd.go:252] Checking existence of image with name "registry.k8s.io/kube-proxy:v1.16.0" and sha "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384"
	I0229 02:08:37.626068  352939 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images check
	I0229 02:08:37.632128  352939 containerd.go:252] Checking existence of image with name "registry.k8s.io/etcd:3.3.15-0" and sha "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed"
	I0229 02:08:37.632185  352939 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images check
	I0229 02:08:37.641822  352939 containerd.go:252] Checking existence of image with name "registry.k8s.io/pause:3.1" and sha "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e"
	I0229 02:08:37.641896  352939 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images check
	I0229 02:08:37.655435  352939 containerd.go:252] Checking existence of image with name "registry.k8s.io/coredns:1.6.2" and sha "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b"
	I0229 02:08:37.655513  352939 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images check
	I0229 02:08:37.656381  352939 containerd.go:252] Checking existence of image with name "registry.k8s.io/kube-apiserver:v1.16.0" and sha "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e"
	I0229 02:08:37.656442  352939 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images check
	I0229 02:08:37.657742  352939 containerd.go:252] Checking existence of image with name "registry.k8s.io/kube-controller-manager:v1.16.0" and sha "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d"
	I0229 02:08:37.657810  352939 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images check
	I0229 02:08:38.318335  352939 containerd.go:252] Checking existence of image with name "gcr.io/k8s-minikube/storage-provisioner:v5" and sha "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"
	I0229 02:08:38.318426  352939 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images check
	I0229 02:08:38.647838  352939 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images check: (1.023819317s)
	I0229 02:08:38.647916  352939 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I0229 02:08:38.647963  352939 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I0229 02:08:38.648013  352939 ssh_runner.go:195] Run: which crictl
	I0229 02:08:38.724848  352939 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images check: (1.098754854s)
	I0229 02:08:38.724933  352939 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I0229 02:08:38.724976  352939 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I0229 02:08:38.725034  352939 ssh_runner.go:195] Run: which crictl
	I0229 02:08:38.965573  352939 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images check: (1.333362029s)
	I0229 02:08:38.965688  352939 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I0229 02:08:38.965739  352939 cri.go:218] Removing image: registry.k8s.io/etcd:3.3.15-0
	I0229 02:08:38.965762  352939 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images check: (1.323842657s)
	I0229 02:08:38.965791  352939 ssh_runner.go:195] Run: which crictl
	I0229 02:08:38.965823  352939 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I0229 02:08:38.965854  352939 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I0229 02:08:38.965893  352939 ssh_runner.go:195] Run: which crictl
	I0229 02:08:38.966394  352939 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images check: (1.310858493s)
	I0229 02:08:38.966455  352939 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I0229 02:08:38.966488  352939 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.2
	I0229 02:08:38.966532  352939 ssh_runner.go:195] Run: which crictl
	I0229 02:08:39.062145  352939 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images check: (1.405666431s)
	I0229 02:08:39.062227  352939 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I0229 02:08:39.062269  352939 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I0229 02:08:39.062337  352939 ssh_runner.go:195] Run: which crictl
	I0229 02:08:39.062683  352939 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images check: (1.404851191s)
	I0229 02:08:39.062743  352939 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I0229 02:08:39.062783  352939 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0229 02:08:39.062838  352939 ssh_runner.go:195] Run: which crictl
	I0229 02:08:39.164011  352939 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0
	I0229 02:08:39.164078  352939 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0
	I0229 02:08:39.164105  352939 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I0229 02:08:39.164127  352939 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0
	I0229 02:08:39.164175  352939 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.2
	I0229 02:08:39.164226  352939 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.16.0
	I0229 02:08:39.164272  352939 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I0229 02:08:39.314119  352939 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18063-309085/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I0229 02:08:39.317249  352939 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18063-309085/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I0229 02:08:39.317327  352939 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18063-309085/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I0229 02:08:39.317422  352939 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18063-309085/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I0229 02:08:39.328054  352939 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18063-309085/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I0229 02:08:39.333333  352939 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18063-309085/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I0229 02:08:39.333394  352939 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18063-309085/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I0229 02:08:39.333454  352939 cache_images.go:92] LoadImages completed in 1.854351218s
	W0229 02:08:39.333541  352939 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18063-309085/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0: no such file or directory
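With the image cache empty, the images will be pulled during kubeadm init instead. The two views of what is already present on the node, both used above:

    sudo crictl images --output json    # CRI view of available images
    sudo ctr -n=k8s.io images check     # containerd's per-image content check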
	I0229 02:08:39.333610  352939 ssh_runner.go:195] Run: sudo crictl info
	I0229 02:08:39.379973  352939 cni.go:84] Creating CNI manager for ""
	I0229 02:08:39.379998  352939 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0229 02:08:39.380017  352939 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0229 02:08:39.380036  352939 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.250 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-254968 NodeName:old-k8s-version-254968 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.250"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.250 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0229 02:08:39.380150  352939 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.250
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "old-k8s-version-254968"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.250
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.250"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-254968
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.50.250:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
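Before handing a generated config like this to the real init, it can be exercised without touching the node (a sketch; kubeadm's --dry-run flag is assumed to behave in v1.16 as documented):

    sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" \
      kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run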
	
	I0229 02:08:39.380229  352939 kubeadm.go:976] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=old-k8s-version-254968 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.250
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-254968 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
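Once the unit file and drop-in are scp'd into place a few lines below, systemd's own tooling shows the merged result (a sketch):

    systemctl cat kubelet           # unit plus the 10-kubeadm.conf drop-in carrying the ExecStart above
    sudo systemctl daemon-reload    # required before a restart picks up the new files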
	I0229 02:08:39.380295  352939 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0229 02:08:39.393551  352939 binaries.go:44] Found k8s binaries, skipping transfer
	I0229 02:08:39.393634  352939 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0229 02:08:39.406098  352939 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (444 bytes)
	I0229 02:08:39.426968  352939 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0229 02:08:39.447451  352939 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2189 bytes)
	I0229 02:08:39.468921  352939 ssh_runner.go:195] Run: grep 192.168.50.250	control-plane.minikube.internal$ /etc/hosts
	I0229 02:08:39.473614  352939 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.250	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0229 02:08:39.488879  352939 certs.go:56] Setting up /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/old-k8s-version-254968 for IP: 192.168.50.250
	I0229 02:08:39.488915  352939 certs.go:190] acquiring lock for shared ca certs: {Name:mkd93205d1e0ff28501dacf7d21e224f19de9501 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:08:39.489086  352939 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18063-309085/.minikube/ca.key
	I0229 02:08:39.489141  352939 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18063-309085/.minikube/proxy-client-ca.key
	I0229 02:08:39.489198  352939 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/old-k8s-version-254968/client.key
	I0229 02:08:39.489211  352939 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/old-k8s-version-254968/client.crt with IP's: []
	I0229 02:08:39.571039  352939 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/old-k8s-version-254968/client.crt ...
	I0229 02:08:39.571085  352939 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/old-k8s-version-254968/client.crt: {Name:mk3b48fc044cbf4166a29b169398e8ffef41e421 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:08:39.571283  352939 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/old-k8s-version-254968/client.key ...
	I0229 02:08:39.571299  352939 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/old-k8s-version-254968/client.key: {Name:mka4ffc2ea8dd4024ec8836e683cf463febb3137 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:08:39.571379  352939 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/old-k8s-version-254968/apiserver.key.df55815c
	I0229 02:08:39.571394  352939 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/old-k8s-version-254968/apiserver.crt.df55815c with IP's: [192.168.50.250 10.96.0.1 127.0.0.1 10.0.0.1]
	I0229 02:08:39.625133  352939 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/old-k8s-version-254968/apiserver.crt.df55815c ...
	I0229 02:08:39.625161  352939 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/old-k8s-version-254968/apiserver.crt.df55815c: {Name:mk40262f094f3c86f3f1856870cbfa804dddea5d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:08:39.625354  352939 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/old-k8s-version-254968/apiserver.key.df55815c ...
	I0229 02:08:39.625382  352939 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/old-k8s-version-254968/apiserver.key.df55815c: {Name:mk502fba553b5476bec39cef43812c9ff221ecdb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:08:39.625495  352939 certs.go:337] copying /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/old-k8s-version-254968/apiserver.crt.df55815c -> /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/old-k8s-version-254968/apiserver.crt
	I0229 02:08:39.625609  352939 certs.go:341] copying /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/old-k8s-version-254968/apiserver.key.df55815c -> /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/old-k8s-version-254968/apiserver.key
	I0229 02:08:39.625692  352939 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/old-k8s-version-254968/proxy-client.key
	I0229 02:08:39.625714  352939 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/old-k8s-version-254968/proxy-client.crt with IP's: []
	I0229 02:08:39.773597  352939 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/old-k8s-version-254968/proxy-client.crt ...
	I0229 02:08:39.773630  352939 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/old-k8s-version-254968/proxy-client.crt: {Name:mk0b1a2d00889efa492481f95f395126771ebb3b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:08:39.773791  352939 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/old-k8s-version-254968/proxy-client.key ...
	I0229 02:08:39.773804  352939 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/old-k8s-version-254968/proxy-client.key: {Name:mkd98616003546dd25d9820333335517d79ec453 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:08:39.774001  352939 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-309085/.minikube/certs/home/jenkins/minikube-integration/18063-309085/.minikube/certs/316336.pem (1338 bytes)
	W0229 02:08:39.774051  352939 certs.go:433] ignoring /home/jenkins/minikube-integration/18063-309085/.minikube/certs/home/jenkins/minikube-integration/18063-309085/.minikube/certs/316336_empty.pem, impossibly tiny 0 bytes
	I0229 02:08:39.774064  352939 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-309085/.minikube/certs/home/jenkins/minikube-integration/18063-309085/.minikube/certs/ca-key.pem (1679 bytes)
	I0229 02:08:39.774123  352939 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-309085/.minikube/certs/home/jenkins/minikube-integration/18063-309085/.minikube/certs/ca.pem (1082 bytes)
	I0229 02:08:39.774165  352939 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-309085/.minikube/certs/home/jenkins/minikube-integration/18063-309085/.minikube/certs/cert.pem (1123 bytes)
	I0229 02:08:39.774217  352939 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-309085/.minikube/certs/home/jenkins/minikube-integration/18063-309085/.minikube/certs/key.pem (1675 bytes)
	I0229 02:08:39.774283  352939 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-309085/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18063-309085/.minikube/files/etc/ssl/certs/3163362.pem (1708 bytes)
	I0229 02:08:39.775012  352939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/old-k8s-version-254968/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0229 02:08:39.805163  352939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/old-k8s-version-254968/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0229 02:08:39.833878  352939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/old-k8s-version-254968/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0229 02:08:39.863924  352939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/old-k8s-version-254968/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0229 02:08:39.892808  352939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-309085/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0229 02:08:39.921392  352939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-309085/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0229 02:08:39.952271  352939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-309085/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0229 02:08:39.981113  352939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-309085/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0229 02:08:40.009299  352939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-309085/.minikube/files/etc/ssl/certs/3163362.pem --> /usr/share/ca-certificates/3163362.pem (1708 bytes)
	I0229 02:08:40.038738  352939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-309085/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0229 02:08:40.067962  352939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-309085/.minikube/certs/316336.pem --> /usr/share/ca-certificates/316336.pem (1338 bytes)
	I0229 02:08:40.097818  352939 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0229 02:08:40.120277  352939 ssh_runner.go:195] Run: openssl version
	I0229 02:08:40.128190  352939 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0229 02:08:40.144167  352939 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0229 02:08:40.149418  352939 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 29 01:12 /usr/share/ca-certificates/minikubeCA.pem
	I0229 02:08:40.149475  352939 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0229 02:08:40.155942  352939 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0229 02:08:40.169070  352939 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/316336.pem && ln -fs /usr/share/ca-certificates/316336.pem /etc/ssl/certs/316336.pem"
	I0229 02:08:40.181839  352939 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/316336.pem
	I0229 02:08:40.187619  352939 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 29 01:18 /usr/share/ca-certificates/316336.pem
	I0229 02:08:40.187688  352939 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/316336.pem
	I0229 02:08:40.194237  352939 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/316336.pem /etc/ssl/certs/51391683.0"
	I0229 02:08:40.210184  352939 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3163362.pem && ln -fs /usr/share/ca-certificates/3163362.pem /etc/ssl/certs/3163362.pem"
	I0229 02:08:40.224622  352939 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3163362.pem
	I0229 02:08:40.230005  352939 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 29 01:18 /usr/share/ca-certificates/3163362.pem
	I0229 02:08:40.230068  352939 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3163362.pem
	I0229 02:08:40.237154  352939 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3163362.pem /etc/ssl/certs/3ec20f2e.0"
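The test -L / ln -fs pairs implement OpenSSL's hashed-directory lookup: each CA certificate is linked under its subject-hash name (b5213941.0, 51391683.0, 3ec20f2e.0) so verification can find it in /etc/ssl/certs. By hand, the hash-to-link step looks like:

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"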
	I0229 02:08:40.254231  352939 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0229 02:08:40.260177  352939 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0229 02:08:40.260243  352939 kubeadm.go:404] StartCluster: {Name:old-k8s-version-254968 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-254968 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.250 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 02:08:40.260328  352939 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0229 02:08:40.260388  352939 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0229 02:08:40.301647  352939 cri.go:89] found id: ""
	I0229 02:08:40.301730  352939 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0229 02:08:40.315444  352939 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0229 02:08:40.327983  352939 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 02:08:40.342233  352939 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 02:08:40.342289  352939 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0229 02:08:40.753101  352939 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0229 02:10:38.665460  352939 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0229 02:10:38.665588  352939 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0229 02:10:38.667087  352939 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0229 02:10:38.667157  352939 kubeadm.go:322] [preflight] Running pre-flight checks
	I0229 02:10:38.667277  352939 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0229 02:10:38.667442  352939 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0229 02:10:38.667573  352939 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0229 02:10:38.667696  352939 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0229 02:10:38.667808  352939 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0229 02:10:38.667878  352939 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0229 02:10:38.667972  352939 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0229 02:10:38.669608  352939 out.go:204]   - Generating certificates and keys ...
	I0229 02:10:38.669695  352939 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0229 02:10:38.669783  352939 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0229 02:10:38.669868  352939 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0229 02:10:38.669940  352939 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0229 02:10:38.670021  352939 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0229 02:10:38.670111  352939 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0229 02:10:38.670217  352939 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0229 02:10:38.670357  352939 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [old-k8s-version-254968 localhost] and IPs [192.168.50.250 127.0.0.1 ::1]
	I0229 02:10:38.670429  352939 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0229 02:10:38.670597  352939 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [old-k8s-version-254968 localhost] and IPs [192.168.50.250 127.0.0.1 ::1]
	I0229 02:10:38.670693  352939 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0229 02:10:38.670800  352939 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0229 02:10:38.670885  352939 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0229 02:10:38.670965  352939 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0229 02:10:38.671038  352939 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0229 02:10:38.671109  352939 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0229 02:10:38.671198  352939 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0229 02:10:38.671300  352939 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0229 02:10:38.671378  352939 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0229 02:10:38.672697  352939 out.go:204]   - Booting up control plane ...
	I0229 02:10:38.672805  352939 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0229 02:10:38.672907  352939 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0229 02:10:38.672980  352939 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0229 02:10:38.673041  352939 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0229 02:10:38.673159  352939 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0229 02:10:38.673216  352939 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0229 02:10:38.673287  352939 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 02:10:38.673447  352939 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 02:10:38.673505  352939 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 02:10:38.673655  352939 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 02:10:38.673715  352939 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 02:10:38.673903  352939 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 02:10:38.673994  352939 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 02:10:38.674280  352939 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 02:10:38.674381  352939 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 02:10:38.674538  352939 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 02:10:38.674555  352939 kubeadm.go:322] 
	I0229 02:10:38.674594  352939 kubeadm.go:322] Unfortunately, an error has occurred:
	I0229 02:10:38.674625  352939 kubeadm.go:322] 	timed out waiting for the condition
	I0229 02:10:38.674632  352939 kubeadm.go:322] 
	I0229 02:10:38.674669  352939 kubeadm.go:322] This error is likely caused by:
	I0229 02:10:38.674722  352939 kubeadm.go:322] 	- The kubelet is not running
	I0229 02:10:38.674876  352939 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0229 02:10:38.674889  352939 kubeadm.go:322] 
	I0229 02:10:38.675040  352939 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0229 02:10:38.675087  352939 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0229 02:10:38.675142  352939 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0229 02:10:38.675151  352939 kubeadm.go:322] 
	I0229 02:10:38.675288  352939 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0229 02:10:38.675405  352939 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0229 02:10:38.675496  352939 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0229 02:10:38.675538  352939 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0229 02:10:38.675595  352939 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0229 02:10:38.675678  352939 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	W0229 02:10:38.675769  352939 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [old-k8s-version-254968 localhost] and IPs [192.168.50.250 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [old-k8s-version-254968 localhost] and IPs [192.168.50.250 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
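The kubeadm hints above assume docker, but this node runs containerd, so the equivalent triage uses crictl (all commands already appear elsewhere in this log except crictl logs, a standard crictl subcommand):

    systemctl status kubelet
    journalctl -xeu kubelet
    sudo crictl ps -a | grep kube | grep -v pause   # containerd stand-in for the docker ps hint
    sudo crictl logs CONTAINERID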
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [old-k8s-version-254968 localhost] and IPs [192.168.50.250 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [old-k8s-version-254968 localhost] and IPs [192.168.50.250 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtime's CLI, e.g. docker.
	Here is one example of how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0229 02:10:38.675831  352939 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0229 02:10:39.150728  352939 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 02:10:39.169953  352939 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 02:10:39.182131  352939 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 02:10:39.182188  352939 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0229 02:10:39.245826  352939 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0229 02:10:39.245910  352939 kubeadm.go:322] [preflight] Running pre-flight checks
	I0229 02:10:39.378672  352939 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0229 02:10:39.378844  352939 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0229 02:10:39.378984  352939 kubeadm.go:322] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0229 02:10:39.604172  352939 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0229 02:10:39.605540  352939 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0229 02:10:39.614968  352939 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0229 02:10:39.755999  352939 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0229 02:10:39.758361  352939 out.go:204]   - Generating certificates and keys ...
	I0229 02:10:39.758451  352939 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0229 02:10:39.758540  352939 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0229 02:10:39.758673  352939 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0229 02:10:39.758793  352939 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0229 02:10:39.758891  352939 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0229 02:10:39.758969  352939 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0229 02:10:39.759054  352939 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0229 02:10:39.759160  352939 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0229 02:10:39.759273  352939 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0229 02:10:39.759364  352939 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0229 02:10:39.759410  352939 kubeadm.go:322] [certs] Using the existing "sa" key
	I0229 02:10:39.759513  352939 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0229 02:10:39.903904  352939 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0229 02:10:40.171619  352939 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0229 02:10:40.697375  352939 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0229 02:10:40.876520  352939 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0229 02:10:40.877444  352939 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0229 02:10:40.879210  352939 out.go:204]   - Booting up control plane ...
	I0229 02:10:40.879350  352939 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0229 02:10:40.884243  352939 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0229 02:10:40.885655  352939 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0229 02:10:40.887461  352939 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0229 02:10:40.890562  352939 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0229 02:11:20.892835  352939 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0229 02:11:20.893158  352939 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 02:11:20.893446  352939 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 02:11:25.895663  352939 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 02:11:25.896046  352939 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 02:11:35.896969  352939 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 02:11:35.897227  352939 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 02:11:55.898638  352939 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 02:11:55.898914  352939 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 02:12:35.898329  352939 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 02:12:35.898614  352939 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 02:12:35.898645  352939 kubeadm.go:322] 
	I0229 02:12:35.898718  352939 kubeadm.go:322] Unfortunately, an error has occurred:
	I0229 02:12:35.898788  352939 kubeadm.go:322] 	timed out waiting for the condition
	I0229 02:12:35.898798  352939 kubeadm.go:322] 
	I0229 02:12:35.898839  352939 kubeadm.go:322] This error is likely caused by:
	I0229 02:12:35.898872  352939 kubeadm.go:322] 	- The kubelet is not running
	I0229 02:12:35.899001  352939 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0229 02:12:35.899017  352939 kubeadm.go:322] 
	I0229 02:12:35.899137  352939 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0229 02:12:35.899182  352939 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0229 02:12:35.899232  352939 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0229 02:12:35.899244  352939 kubeadm.go:322] 
	I0229 02:12:35.899391  352939 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0229 02:12:35.899515  352939 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtime's CLI, e.g. docker.
	I0229 02:12:35.899625  352939 kubeadm.go:322] Here is one example of how you may list all Kubernetes containers running in docker:
	I0229 02:12:35.899700  352939 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0229 02:12:35.899797  352939 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0229 02:12:35.899849  352939 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0229 02:12:35.900789  352939 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0229 02:12:35.900906  352939 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0229 02:12:35.901010  352939 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0229 02:12:35.901065  352939 kubeadm.go:406] StartCluster complete in 3m55.640832895s
	I0229 02:12:35.901140  352939 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:12:35.901212  352939 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:12:35.952939  352939 cri.go:89] found id: ""
	I0229 02:12:35.952966  352939 logs.go:276] 0 containers: []
	W0229 02:12:35.952974  352939 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:12:35.952982  352939 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:12:35.953049  352939 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:12:35.988981  352939 cri.go:89] found id: ""
	I0229 02:12:35.989012  352939 logs.go:276] 0 containers: []
	W0229 02:12:35.989021  352939 logs.go:278] No container was found matching "etcd"
	I0229 02:12:35.989027  352939 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:12:35.989081  352939 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:12:36.022715  352939 cri.go:89] found id: ""
	I0229 02:12:36.022749  352939 logs.go:276] 0 containers: []
	W0229 02:12:36.022762  352939 logs.go:278] No container was found matching "coredns"
	I0229 02:12:36.022770  352939 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:12:36.022852  352939 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:12:36.059376  352939 cri.go:89] found id: ""
	I0229 02:12:36.059413  352939 logs.go:276] 0 containers: []
	W0229 02:12:36.059425  352939 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:12:36.059434  352939 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:12:36.059501  352939 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:12:36.098205  352939 cri.go:89] found id: ""
	I0229 02:12:36.098237  352939 logs.go:276] 0 containers: []
	W0229 02:12:36.098246  352939 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:12:36.098252  352939 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:12:36.098331  352939 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:12:36.138193  352939 cri.go:89] found id: ""
	I0229 02:12:36.138225  352939 logs.go:276] 0 containers: []
	W0229 02:12:36.138236  352939 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:12:36.138244  352939 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:12:36.138322  352939 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:12:36.174400  352939 cri.go:89] found id: ""
	I0229 02:12:36.174427  352939 logs.go:276] 0 containers: []
	W0229 02:12:36.174436  352939 logs.go:278] No container was found matching "kindnet"
	I0229 02:12:36.174446  352939 logs.go:123] Gathering logs for kubelet ...
	I0229 02:12:36.174460  352939 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:12:36.221302  352939 logs.go:123] Gathering logs for dmesg ...
	I0229 02:12:36.221336  352939 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:12:36.236708  352939 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:12:36.236738  352939 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:12:36.395030  352939 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:12:36.395059  352939 logs.go:123] Gathering logs for containerd ...
	I0229 02:12:36.395072  352939 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:12:36.435335  352939 logs.go:123] Gathering logs for container status ...
	I0229 02:12:36.435368  352939 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0229 02:12:36.475320  352939 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtime's CLI, e.g. docker.
	Here is one example of how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0229 02:12:36.475369  352939 out.go:239] * 
	W0229 02:12:36.475426  352939 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtime's CLI, e.g. docker.
	Here is one example of how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0229 02:12:36.475444  352939 out.go:239] * 
	W0229 02:12:36.476344  352939 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0229 02:12:36.479302  352939 out.go:177] 
	W0229 02:12:36.480466  352939 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtime's CLI, e.g. docker.
	Here is one example of how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0229 02:12:36.480517  352939 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0229 02:12:36.480538  352939 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0229 02:12:36.481961  352939 out.go:177] 

                                                
                                                
** /stderr **
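The docker commands suggested in the kubeadm troubleshooting text above assume a docker runtime; this profile was started with --container-runtime=containerd, so the crictl equivalents apply instead. A minimal sketch, assuming crictl is available inside the VM (minikube's own diagnostics above already invoke it via sudo):

	minikube ssh -p old-k8s-version-254968 -- sudo crictl ps -a      # list all containers, including exited ones
	minikube ssh -p old-k8s-version-254968 -- sudo crictl logs CONTAINERID
	minikube ssh -p old-k8s-version-254968 -- sudo journalctl -xeu kubelet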
start_stop_delete_test.go:188: failed starting minikube (first start). args "out/minikube-linux-amd64 start -p old-k8s-version-254968 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.16.0": exit status 109
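Exit status 109 matches the K8S_KUBELET_NOT_RUNNING reason reported above. The log's own suggestion is to override the kubelet cgroup driver; a hedged retry of the same invocation (flags abbreviated, --extra-config taken verbatim from the suggestion; this run does not establish that it fixes the timeout) would be:

	out/minikube-linux-amd64 start -p old-k8s-version-254968 --memory=2200 \
		--driver=kvm2 --container-runtime=containerd --kubernetes-version=v1.16.0 \
		--extra-config=kubelet.cgroup-driver=systemd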
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-254968 -n old-k8s-version-254968
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-254968 -n old-k8s-version-254968: exit status 6 (244.758111ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0229 02:12:36.762183  359884 status.go:415] kubeconfig endpoint: extract IP: "old-k8s-version-254968" does not appear in /home/jenkins/minikube-integration/18063-309085/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-254968" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (295.14s)
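The status checks above exit with status 6 because the profile never made it into the kubeconfig, which is also why kubectl is left pointing at a stale context. The warning's own remediation is minikube update-context; a minimal sketch (note it can only repair an entry that exists, and the extract-IP error above suggests this one does not):

	out/minikube-linux-amd64 update-context -p old-k8s-version-254968
	kubectl config current-context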

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (0.55s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-254968 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-254968 create -f testdata/busybox.yaml: exit status 1 (45.198572ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-254968" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-254968 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-254968 -n old-k8s-version-254968
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-254968 -n old-k8s-version-254968: exit status 6 (250.00075ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0229 02:12:37.054652  359923 status.go:415] kubeconfig endpoint: extract IP: "old-k8s-version-254968" does not appear in /home/jenkins/minikube-integration/18063-309085/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-254968" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-254968 -n old-k8s-version-254968
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-254968 -n old-k8s-version-254968: exit status 6 (252.903464ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0229 02:12:37.312495  359954 status.go:415] kubeconfig endpoint: extract IP: "old-k8s-version-254968" does not appear in /home/jenkins/minikube-integration/18063-309085/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-254968" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.55s)
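The create and both post-mortems fail for the same root cause: the old-k8s-version-254968 context was never written because FirstStart failed. Before retrying the deploy, the available contexts can be confirmed with plain kubectl (nothing minikube-specific is assumed here):

	kubectl config get-contexts
	kubectl --context old-k8s-version-254968 create -f testdata/busybox.yaml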

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (88.4s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-254968 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-254968 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m28.095187499s)

                                                
                                                
-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	unable to recognize "/etc/kubernetes/addons/metrics-apiservice.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-deployment.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-service.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
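Every manifest in the addon callback fails with the same connection-refused error against https://localhost:8443, so the enable step is blocked on the apiserver that never came up, not on the metrics-server manifests themselves. A minimal probe of that endpoint from inside the VM, reusing the URL from the errors above:

	minikube ssh -p old-k8s-version-254968 -- curl -sk "https://localhost:8443/api?timeout=32s"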
start_stop_delete_test.go:207: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-254968 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-254968 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-254968 describe deploy/metrics-server -n kube-system: exit status 1 (45.819529ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-254968" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-254968 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-254968 -n old-k8s-version-254968
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-254968 -n old-k8s-version-254968: exit status 6 (254.10321ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0229 02:14:05.708648  360652 status.go:415] kubeconfig endpoint: extract IP: "old-k8s-version-254968" does not appear in /home/jenkins/minikube-integration/18063-309085/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-254968" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (88.40s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (519.36s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-254968 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.16.0
E0229 02:14:18.529917  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/bridge-704272/client.crt: no such file or directory
E0229 02:14:18.535237  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/bridge-704272/client.crt: no such file or directory
E0229 02:14:18.545480  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/bridge-704272/client.crt: no such file or directory
E0229 02:14:18.565724  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/bridge-704272/client.crt: no such file or directory
E0229 02:14:18.606212  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/bridge-704272/client.crt: no such file or directory
E0229 02:14:18.686595  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/bridge-704272/client.crt: no such file or directory
E0229 02:14:18.847096  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/bridge-704272/client.crt: no such file or directory
E0229 02:14:19.167323  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/bridge-704272/client.crt: no such file or directory
E0229 02:14:19.808244  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/bridge-704272/client.crt: no such file or directory
E0229 02:14:19.892951  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/flannel-704272/client.crt: no such file or directory
E0229 02:14:21.088560  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/bridge-704272/client.crt: no such file or directory
E0229 02:14:23.648782  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/bridge-704272/client.crt: no such file or directory
E0229 02:14:27.378584  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/enable-default-cni-704272/client.crt: no such file or directory
E0229 02:14:28.597199  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/addons-026134/client.crt: no such file or directory
E0229 02:14:28.769108  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/bridge-704272/client.crt: no such file or directory
E0229 02:14:36.434835  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/kindnet-704272/client.crt: no such file or directory
E0229 02:14:39.009704  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/bridge-704272/client.crt: no such file or directory
E0229 02:14:59.490897  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/bridge-704272/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-254968 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.16.0: exit status 109 (8m36.861614206s)

                                                
                                                
-- stdout --
	* [old-k8s-version-254968] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18063
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18063-309085/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18063-309085/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	* Using the kvm2 driver based on existing profile
	* Starting control plane node old-k8s-version-254968 in cluster old-k8s-version-254968
	* Restarting existing kvm2 VM for "old-k8s-version-254968" ...
	* Preparing Kubernetes v1.16.0 on containerd 1.7.11 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
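Note that the kubeadm phases ("Generating certificates and keys" / "Booting up control plane") appear twice in the stdout above: the first bring-up did not produce a healthy control plane, minikube retried, and the run ultimately failed with exit status 109 after 8m36s. When reproducing locally, a reasonable next step (suggested commands, not part of this run) is to collect the full log bundle and inspect container state inside the VM:

	out/minikube-linux-amd64 logs -p old-k8s-version-254968 --file=logs.txt
	out/minikube-linux-amd64 ssh -p old-k8s-version-254968 "sudo crictl ps -a"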
** stderr ** 
	I0229 02:14:07.345681  360776 out.go:291] Setting OutFile to fd 1 ...
	I0229 02:14:07.346177  360776 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 02:14:07.346192  360776 out.go:304] Setting ErrFile to fd 2...
	I0229 02:14:07.346200  360776 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 02:14:07.346679  360776 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18063-309085/.minikube/bin
	I0229 02:14:07.347639  360776 out.go:298] Setting JSON to false
	I0229 02:14:07.348711  360776 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":6992,"bootTime":1709165856,"procs":215,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1052-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0229 02:14:07.348780  360776 start.go:139] virtualization: kvm guest
	I0229 02:14:07.350707  360776 out.go:177] * [old-k8s-version-254968] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0229 02:14:07.352656  360776 out.go:177]   - MINIKUBE_LOCATION=18063
	I0229 02:14:07.352694  360776 notify.go:220] Checking for updates...
	I0229 02:14:07.353959  360776 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0229 02:14:07.355621  360776 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18063-309085/kubeconfig
	I0229 02:14:07.356966  360776 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18063-309085/.minikube
	I0229 02:14:07.358350  360776 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0229 02:14:07.359655  360776 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0229 02:14:07.361216  360776 config.go:182] Loaded profile config "old-k8s-version-254968": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.16.0
	I0229 02:14:07.361644  360776 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0229 02:14:07.361716  360776 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:14:07.378050  360776 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33227
	I0229 02:14:07.378541  360776 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:14:07.379112  360776 main.go:141] libmachine: Using API Version  1
	I0229 02:14:07.379147  360776 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:14:07.379471  360776 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:14:07.379617  360776 main.go:141] libmachine: (old-k8s-version-254968) Calling .DriverName
	I0229 02:14:07.381230  360776 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I0229 02:14:07.382368  360776 driver.go:392] Setting default libvirt URI to qemu:///system
	I0229 02:14:07.382631  360776 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0229 02:14:07.382668  360776 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:14:07.397177  360776 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33127
	I0229 02:14:07.397622  360776 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:14:07.398061  360776 main.go:141] libmachine: Using API Version  1
	I0229 02:14:07.398105  360776 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:14:07.398415  360776 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:14:07.398624  360776 main.go:141] libmachine: (old-k8s-version-254968) Calling .DriverName
	I0229 02:14:07.431959  360776 out.go:177] * Using the kvm2 driver based on existing profile
	I0229 02:14:07.433103  360776 start.go:299] selected driver: kvm2
	I0229 02:14:07.433117  360776 start.go:903] validating driver "kvm2" against &{Name:old-k8s-version-254968 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-254968 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.250 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 02:14:07.433216  360776 start.go:914] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0229 02:14:07.434014  360776 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 02:14:07.434123  360776 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18063-309085/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0229 02:14:07.448872  360776 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0229 02:14:07.449196  360776 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0229 02:14:07.449275  360776 cni.go:84] Creating CNI manager for ""
	I0229 02:14:07.449289  360776 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0229 02:14:07.449298  360776 start_flags.go:323] config:
	{Name:old-k8s-version-254968 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-254968 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.250 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 02:14:07.449447  360776 iso.go:125] acquiring lock: {Name:mk1a6013324bd96d611c2de882ca0af6f4df38f8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 02:14:07.452077  360776 out.go:177] * Starting control plane node old-k8s-version-254968 in cluster old-k8s-version-254968
	I0229 02:14:07.453385  360776 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I0229 02:14:07.453416  360776 preload.go:148] Found local preload: /home/jenkins/minikube-integration/18063-309085/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4
	I0229 02:14:07.453424  360776 cache.go:56] Caching tarball of preloaded images
	I0229 02:14:07.453508  360776 preload.go:174] Found /home/jenkins/minikube-integration/18063-309085/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0229 02:14:07.453519  360776 cache.go:59] Finished verifying existence of preloaded tar for  v1.16.0 on containerd
	I0229 02:14:07.453611  360776 profile.go:148] Saving config to /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/old-k8s-version-254968/config.json ...
	I0229 02:14:07.453800  360776 start.go:365] acquiring machines lock for old-k8s-version-254968: {Name:mk8de78527e9cb979575b614e5d893b33768243a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0229 02:14:07.453839  360776 start.go:369] acquired machines lock for "old-k8s-version-254968" in 21.191µs
	I0229 02:14:07.453855  360776 start.go:96] Skipping create...Using existing machine configuration
	I0229 02:14:07.453863  360776 fix.go:54] fixHost starting: 
	I0229 02:14:07.454143  360776 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0229 02:14:07.454175  360776 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:14:07.469136  360776 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42969
	I0229 02:14:07.469581  360776 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:14:07.469981  360776 main.go:141] libmachine: Using API Version  1
	I0229 02:14:07.470006  360776 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:14:07.470283  360776 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:14:07.470482  360776 main.go:141] libmachine: (old-k8s-version-254968) Calling .DriverName
	I0229 02:14:07.470634  360776 main.go:141] libmachine: (old-k8s-version-254968) Calling .GetState
	I0229 02:14:07.472246  360776 fix.go:102] recreateIfNeeded on old-k8s-version-254968: state=Stopped err=<nil>
	I0229 02:14:07.472272  360776 main.go:141] libmachine: (old-k8s-version-254968) Calling .DriverName
	W0229 02:14:07.472423  360776 fix.go:128] unexpected machine state, will restart: <nil>
	I0229 02:14:07.474127  360776 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-254968" ...
	I0229 02:14:07.475259  360776 main.go:141] libmachine: (old-k8s-version-254968) Calling .Start
	I0229 02:14:07.475437  360776 main.go:141] libmachine: (old-k8s-version-254968) Ensuring networks are active...
	I0229 02:14:07.476237  360776 main.go:141] libmachine: (old-k8s-version-254968) Ensuring network default is active
	I0229 02:14:07.476601  360776 main.go:141] libmachine: (old-k8s-version-254968) Ensuring network mk-old-k8s-version-254968 is active
	I0229 02:14:07.477024  360776 main.go:141] libmachine: (old-k8s-version-254968) Getting domain xml...
	I0229 02:14:07.477858  360776 main.go:141] libmachine: (old-k8s-version-254968) Creating domain...
	I0229 02:14:08.697875  360776 main.go:141] libmachine: (old-k8s-version-254968) Waiting to get IP...
	I0229 02:14:08.698916  360776 main.go:141] libmachine: (old-k8s-version-254968) DBG | domain old-k8s-version-254968 has defined MAC address 52:54:00:7f:e3:b1 in network mk-old-k8s-version-254968
	I0229 02:14:08.699387  360776 main.go:141] libmachine: (old-k8s-version-254968) DBG | unable to find current IP address of domain old-k8s-version-254968 in network mk-old-k8s-version-254968
	I0229 02:14:08.699480  360776 main.go:141] libmachine: (old-k8s-version-254968) DBG | I0229 02:14:08.699377  360811 retry.go:31] will retry after 252.563782ms: waiting for machine to come up
	I0229 02:14:08.953885  360776 main.go:141] libmachine: (old-k8s-version-254968) DBG | domain old-k8s-version-254968 has defined MAC address 52:54:00:7f:e3:b1 in network mk-old-k8s-version-254968
	I0229 02:14:08.954366  360776 main.go:141] libmachine: (old-k8s-version-254968) DBG | unable to find current IP address of domain old-k8s-version-254968 in network mk-old-k8s-version-254968
	I0229 02:14:08.954394  360776 main.go:141] libmachine: (old-k8s-version-254968) DBG | I0229 02:14:08.954326  360811 retry.go:31] will retry after 249.40541ms: waiting for machine to come up
	I0229 02:14:09.205721  360776 main.go:141] libmachine: (old-k8s-version-254968) DBG | domain old-k8s-version-254968 has defined MAC address 52:54:00:7f:e3:b1 in network mk-old-k8s-version-254968
	I0229 02:14:09.206269  360776 main.go:141] libmachine: (old-k8s-version-254968) DBG | unable to find current IP address of domain old-k8s-version-254968 in network mk-old-k8s-version-254968
	I0229 02:14:09.206306  360776 main.go:141] libmachine: (old-k8s-version-254968) DBG | I0229 02:14:09.206204  360811 retry.go:31] will retry after 413.330358ms: waiting for machine to come up
	I0229 02:14:09.620815  360776 main.go:141] libmachine: (old-k8s-version-254968) DBG | domain old-k8s-version-254968 has defined MAC address 52:54:00:7f:e3:b1 in network mk-old-k8s-version-254968
	I0229 02:14:09.621371  360776 main.go:141] libmachine: (old-k8s-version-254968) DBG | unable to find current IP address of domain old-k8s-version-254968 in network mk-old-k8s-version-254968
	I0229 02:14:09.621399  360776 main.go:141] libmachine: (old-k8s-version-254968) DBG | I0229 02:14:09.621317  360811 retry.go:31] will retry after 494.000403ms: waiting for machine to come up
	I0229 02:14:10.116930  360776 main.go:141] libmachine: (old-k8s-version-254968) DBG | domain old-k8s-version-254968 has defined MAC address 52:54:00:7f:e3:b1 in network mk-old-k8s-version-254968
	I0229 02:14:10.117460  360776 main.go:141] libmachine: (old-k8s-version-254968) DBG | unable to find current IP address of domain old-k8s-version-254968 in network mk-old-k8s-version-254968
	I0229 02:14:10.117489  360776 main.go:141] libmachine: (old-k8s-version-254968) DBG | I0229 02:14:10.117409  360811 retry.go:31] will retry after 633.513616ms: waiting for machine to come up
	I0229 02:14:10.752791  360776 main.go:141] libmachine: (old-k8s-version-254968) DBG | domain old-k8s-version-254968 has defined MAC address 52:54:00:7f:e3:b1 in network mk-old-k8s-version-254968
	I0229 02:14:10.753362  360776 main.go:141] libmachine: (old-k8s-version-254968) DBG | unable to find current IP address of domain old-k8s-version-254968 in network mk-old-k8s-version-254968
	I0229 02:14:10.753389  360776 main.go:141] libmachine: (old-k8s-version-254968) DBG | I0229 02:14:10.753323  360811 retry.go:31] will retry after 689.613395ms: waiting for machine to come up
	I0229 02:14:11.444102  360776 main.go:141] libmachine: (old-k8s-version-254968) DBG | domain old-k8s-version-254968 has defined MAC address 52:54:00:7f:e3:b1 in network mk-old-k8s-version-254968
	I0229 02:14:11.444567  360776 main.go:141] libmachine: (old-k8s-version-254968) DBG | unable to find current IP address of domain old-k8s-version-254968 in network mk-old-k8s-version-254968
	I0229 02:14:11.444594  360776 main.go:141] libmachine: (old-k8s-version-254968) DBG | I0229 02:14:11.444520  360811 retry.go:31] will retry after 1.069080513s: waiting for machine to come up
	I0229 02:14:12.514804  360776 main.go:141] libmachine: (old-k8s-version-254968) DBG | domain old-k8s-version-254968 has defined MAC address 52:54:00:7f:e3:b1 in network mk-old-k8s-version-254968
	I0229 02:14:12.515221  360776 main.go:141] libmachine: (old-k8s-version-254968) DBG | unable to find current IP address of domain old-k8s-version-254968 in network mk-old-k8s-version-254968
	I0229 02:14:12.515254  360776 main.go:141] libmachine: (old-k8s-version-254968) DBG | I0229 02:14:12.515160  360811 retry.go:31] will retry after 1.003335002s: waiting for machine to come up
	I0229 02:14:13.520180  360776 main.go:141] libmachine: (old-k8s-version-254968) DBG | domain old-k8s-version-254968 has defined MAC address 52:54:00:7f:e3:b1 in network mk-old-k8s-version-254968
	I0229 02:14:13.520704  360776 main.go:141] libmachine: (old-k8s-version-254968) DBG | unable to find current IP address of domain old-k8s-version-254968 in network mk-old-k8s-version-254968
	I0229 02:14:13.520731  360776 main.go:141] libmachine: (old-k8s-version-254968) DBG | I0229 02:14:13.520661  360811 retry.go:31] will retry after 1.484489021s: waiting for machine to come up
	I0229 02:14:15.007250  360776 main.go:141] libmachine: (old-k8s-version-254968) DBG | domain old-k8s-version-254968 has defined MAC address 52:54:00:7f:e3:b1 in network mk-old-k8s-version-254968
	I0229 02:14:15.007822  360776 main.go:141] libmachine: (old-k8s-version-254968) DBG | unable to find current IP address of domain old-k8s-version-254968 in network mk-old-k8s-version-254968
	I0229 02:14:15.007853  360776 main.go:141] libmachine: (old-k8s-version-254968) DBG | I0229 02:14:15.007767  360811 retry.go:31] will retry after 1.478599161s: waiting for machine to come up
	I0229 02:14:16.488408  360776 main.go:141] libmachine: (old-k8s-version-254968) DBG | domain old-k8s-version-254968 has defined MAC address 52:54:00:7f:e3:b1 in network mk-old-k8s-version-254968
	I0229 02:14:16.488964  360776 main.go:141] libmachine: (old-k8s-version-254968) DBG | unable to find current IP address of domain old-k8s-version-254968 in network mk-old-k8s-version-254968
	I0229 02:14:16.488995  360776 main.go:141] libmachine: (old-k8s-version-254968) DBG | I0229 02:14:16.488900  360811 retry.go:31] will retry after 2.461061365s: waiting for machine to come up
	I0229 02:14:18.951358  360776 main.go:141] libmachine: (old-k8s-version-254968) DBG | domain old-k8s-version-254968 has defined MAC address 52:54:00:7f:e3:b1 in network mk-old-k8s-version-254968
	I0229 02:14:18.952040  360776 main.go:141] libmachine: (old-k8s-version-254968) DBG | unable to find current IP address of domain old-k8s-version-254968 in network mk-old-k8s-version-254968
	I0229 02:14:18.952069  360776 main.go:141] libmachine: (old-k8s-version-254968) DBG | I0229 02:14:18.951977  360811 retry.go:31] will retry after 2.754923415s: waiting for machine to come up
	I0229 02:14:21.710022  360776 main.go:141] libmachine: (old-k8s-version-254968) DBG | domain old-k8s-version-254968 has defined MAC address 52:54:00:7f:e3:b1 in network mk-old-k8s-version-254968
	I0229 02:14:21.710550  360776 main.go:141] libmachine: (old-k8s-version-254968) DBG | unable to find current IP address of domain old-k8s-version-254968 in network mk-old-k8s-version-254968
	I0229 02:14:21.710590  360776 main.go:141] libmachine: (old-k8s-version-254968) DBG | I0229 02:14:21.710510  360811 retry.go:31] will retry after 3.509631579s: waiting for machine to come up
	I0229 02:14:25.223179  360776 main.go:141] libmachine: (old-k8s-version-254968) DBG | domain old-k8s-version-254968 has defined MAC address 52:54:00:7f:e3:b1 in network mk-old-k8s-version-254968
	I0229 02:14:25.223718  360776 main.go:141] libmachine: (old-k8s-version-254968) Found IP for machine: 192.168.50.250
	I0229 02:14:25.223742  360776 main.go:141] libmachine: (old-k8s-version-254968) DBG | domain old-k8s-version-254968 has current primary IP address 192.168.50.250 and MAC address 52:54:00:7f:e3:b1 in network mk-old-k8s-version-254968
	I0229 02:14:25.223765  360776 main.go:141] libmachine: (old-k8s-version-254968) Reserving static IP address...
	I0229 02:14:25.224193  360776 main.go:141] libmachine: (old-k8s-version-254968) DBG | found host DHCP lease matching {name: "old-k8s-version-254968", mac: "52:54:00:7f:e3:b1", ip: "192.168.50.250"} in network mk-old-k8s-version-254968: {Iface:virbr2 ExpiryTime:2024-02-29 03:08:18 +0000 UTC Type:0 Mac:52:54:00:7f:e3:b1 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-254968 Clientid:01:52:54:00:7f:e3:b1}
	I0229 02:14:25.224239  360776 main.go:141] libmachine: (old-k8s-version-254968) DBG | skip adding static IP to network mk-old-k8s-version-254968 - found existing host DHCP lease matching {name: "old-k8s-version-254968", mac: "52:54:00:7f:e3:b1", ip: "192.168.50.250"}
	I0229 02:14:25.224260  360776 main.go:141] libmachine: (old-k8s-version-254968) Reserved static IP address: 192.168.50.250
	I0229 02:14:25.224278  360776 main.go:141] libmachine: (old-k8s-version-254968) Waiting for SSH to be available...
	I0229 02:14:25.224294  360776 main.go:141] libmachine: (old-k8s-version-254968) DBG | Getting to WaitForSSH function...
	I0229 02:14:25.226278  360776 main.go:141] libmachine: (old-k8s-version-254968) DBG | domain old-k8s-version-254968 has defined MAC address 52:54:00:7f:e3:b1 in network mk-old-k8s-version-254968
	I0229 02:14:25.226573  360776 main.go:141] libmachine: (old-k8s-version-254968) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:e3:b1", ip: ""} in network mk-old-k8s-version-254968: {Iface:virbr2 ExpiryTime:2024-02-29 03:08:18 +0000 UTC Type:0 Mac:52:54:00:7f:e3:b1 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-254968 Clientid:01:52:54:00:7f:e3:b1}
	I0229 02:14:25.226605  360776 main.go:141] libmachine: (old-k8s-version-254968) DBG | domain old-k8s-version-254968 has defined IP address 192.168.50.250 and MAC address 52:54:00:7f:e3:b1 in network mk-old-k8s-version-254968
	I0229 02:14:25.226730  360776 main.go:141] libmachine: (old-k8s-version-254968) DBG | Using SSH client type: external
	I0229 02:14:25.226756  360776 main.go:141] libmachine: (old-k8s-version-254968) DBG | Using SSH private key: /home/jenkins/minikube-integration/18063-309085/.minikube/machines/old-k8s-version-254968/id_rsa (-rw-------)
	I0229 02:14:25.226787  360776 main.go:141] libmachine: (old-k8s-version-254968) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.250 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18063-309085/.minikube/machines/old-k8s-version-254968/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0229 02:14:25.226802  360776 main.go:141] libmachine: (old-k8s-version-254968) DBG | About to run SSH command:
	I0229 02:14:25.226820  360776 main.go:141] libmachine: (old-k8s-version-254968) DBG | exit 0
	I0229 02:14:25.358519  360776 main.go:141] libmachine: (old-k8s-version-254968) DBG | SSH cmd err, output: <nil>: 
	I0229 02:14:25.358897  360776 main.go:141] libmachine: (old-k8s-version-254968) Calling .GetConfigRaw
	I0229 02:14:25.359572  360776 main.go:141] libmachine: (old-k8s-version-254968) Calling .GetIP
	I0229 02:14:25.362411  360776 main.go:141] libmachine: (old-k8s-version-254968) DBG | domain old-k8s-version-254968 has defined MAC address 52:54:00:7f:e3:b1 in network mk-old-k8s-version-254968
	I0229 02:14:25.362776  360776 main.go:141] libmachine: (old-k8s-version-254968) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:e3:b1", ip: ""} in network mk-old-k8s-version-254968: {Iface:virbr2 ExpiryTime:2024-02-29 03:08:18 +0000 UTC Type:0 Mac:52:54:00:7f:e3:b1 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-254968 Clientid:01:52:54:00:7f:e3:b1}
	I0229 02:14:25.362806  360776 main.go:141] libmachine: (old-k8s-version-254968) DBG | domain old-k8s-version-254968 has defined IP address 192.168.50.250 and MAC address 52:54:00:7f:e3:b1 in network mk-old-k8s-version-254968
	I0229 02:14:25.363089  360776 profile.go:148] Saving config to /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/old-k8s-version-254968/config.json ...
	I0229 02:14:25.363311  360776 machine.go:88] provisioning docker machine ...
	I0229 02:14:25.363332  360776 main.go:141] libmachine: (old-k8s-version-254968) Calling .DriverName
	I0229 02:14:25.363569  360776 main.go:141] libmachine: (old-k8s-version-254968) Calling .GetMachineName
	I0229 02:14:25.363752  360776 buildroot.go:166] provisioning hostname "old-k8s-version-254968"
	I0229 02:14:25.363775  360776 main.go:141] libmachine: (old-k8s-version-254968) Calling .GetMachineName
	I0229 02:14:25.363931  360776 main.go:141] libmachine: (old-k8s-version-254968) Calling .GetSSHHostname
	I0229 02:14:25.366127  360776 main.go:141] libmachine: (old-k8s-version-254968) DBG | domain old-k8s-version-254968 has defined MAC address 52:54:00:7f:e3:b1 in network mk-old-k8s-version-254968
	I0229 02:14:25.366578  360776 main.go:141] libmachine: (old-k8s-version-254968) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:e3:b1", ip: ""} in network mk-old-k8s-version-254968: {Iface:virbr2 ExpiryTime:2024-02-29 03:08:18 +0000 UTC Type:0 Mac:52:54:00:7f:e3:b1 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-254968 Clientid:01:52:54:00:7f:e3:b1}
	I0229 02:14:25.366608  360776 main.go:141] libmachine: (old-k8s-version-254968) DBG | domain old-k8s-version-254968 has defined IP address 192.168.50.250 and MAC address 52:54:00:7f:e3:b1 in network mk-old-k8s-version-254968
	I0229 02:14:25.366806  360776 main.go:141] libmachine: (old-k8s-version-254968) Calling .GetSSHPort
	I0229 02:14:25.366978  360776 main.go:141] libmachine: (old-k8s-version-254968) Calling .GetSSHKeyPath
	I0229 02:14:25.367160  360776 main.go:141] libmachine: (old-k8s-version-254968) Calling .GetSSHKeyPath
	I0229 02:14:25.367264  360776 main.go:141] libmachine: (old-k8s-version-254968) Calling .GetSSHUsername
	I0229 02:14:25.367498  360776 main.go:141] libmachine: Using SSH client type: native
	I0229 02:14:25.367698  360776 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.50.250 22 <nil> <nil>}
	I0229 02:14:25.367711  360776 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-254968 && echo "old-k8s-version-254968" | sudo tee /etc/hostname
	I0229 02:14:25.499636  360776 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-254968
	
	I0229 02:14:25.499669  360776 main.go:141] libmachine: (old-k8s-version-254968) Calling .GetSSHHostname
	I0229 02:14:25.502614  360776 main.go:141] libmachine: (old-k8s-version-254968) DBG | domain old-k8s-version-254968 has defined MAC address 52:54:00:7f:e3:b1 in network mk-old-k8s-version-254968
	I0229 02:14:25.502953  360776 main.go:141] libmachine: (old-k8s-version-254968) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:e3:b1", ip: ""} in network mk-old-k8s-version-254968: {Iface:virbr2 ExpiryTime:2024-02-29 03:08:18 +0000 UTC Type:0 Mac:52:54:00:7f:e3:b1 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-254968 Clientid:01:52:54:00:7f:e3:b1}
	I0229 02:14:25.502995  360776 main.go:141] libmachine: (old-k8s-version-254968) DBG | domain old-k8s-version-254968 has defined IP address 192.168.50.250 and MAC address 52:54:00:7f:e3:b1 in network mk-old-k8s-version-254968
	I0229 02:14:25.503201  360776 main.go:141] libmachine: (old-k8s-version-254968) Calling .GetSSHPort
	I0229 02:14:25.503433  360776 main.go:141] libmachine: (old-k8s-version-254968) Calling .GetSSHKeyPath
	I0229 02:14:25.503578  360776 main.go:141] libmachine: (old-k8s-version-254968) Calling .GetSSHKeyPath
	I0229 02:14:25.503726  360776 main.go:141] libmachine: (old-k8s-version-254968) Calling .GetSSHUsername
	I0229 02:14:25.503927  360776 main.go:141] libmachine: Using SSH client type: native
	I0229 02:14:25.504190  360776 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.50.250 22 <nil> <nil>}
	I0229 02:14:25.504213  360776 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-254968' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-254968/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-254968' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0229 02:14:25.629567  360776 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0229 02:14:25.629607  360776 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18063-309085/.minikube CaCertPath:/home/jenkins/minikube-integration/18063-309085/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18063-309085/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18063-309085/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18063-309085/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18063-309085/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18063-309085/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18063-309085/.minikube}
	I0229 02:14:25.629632  360776 buildroot.go:174] setting up certificates
	I0229 02:14:25.629641  360776 provision.go:83] configureAuth start
	I0229 02:14:25.629651  360776 main.go:141] libmachine: (old-k8s-version-254968) Calling .GetMachineName
	I0229 02:14:25.629976  360776 main.go:141] libmachine: (old-k8s-version-254968) Calling .GetIP
	I0229 02:14:25.632584  360776 main.go:141] libmachine: (old-k8s-version-254968) DBG | domain old-k8s-version-254968 has defined MAC address 52:54:00:7f:e3:b1 in network mk-old-k8s-version-254968
	I0229 02:14:25.632910  360776 main.go:141] libmachine: (old-k8s-version-254968) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:e3:b1", ip: ""} in network mk-old-k8s-version-254968: {Iface:virbr2 ExpiryTime:2024-02-29 03:08:18 +0000 UTC Type:0 Mac:52:54:00:7f:e3:b1 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-254968 Clientid:01:52:54:00:7f:e3:b1}
	I0229 02:14:25.632960  360776 main.go:141] libmachine: (old-k8s-version-254968) DBG | domain old-k8s-version-254968 has defined IP address 192.168.50.250 and MAC address 52:54:00:7f:e3:b1 in network mk-old-k8s-version-254968
	I0229 02:14:25.633171  360776 main.go:141] libmachine: (old-k8s-version-254968) Calling .GetSSHHostname
	I0229 02:14:25.635547  360776 main.go:141] libmachine: (old-k8s-version-254968) DBG | domain old-k8s-version-254968 has defined MAC address 52:54:00:7f:e3:b1 in network mk-old-k8s-version-254968
	I0229 02:14:25.635915  360776 main.go:141] libmachine: (old-k8s-version-254968) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:e3:b1", ip: ""} in network mk-old-k8s-version-254968: {Iface:virbr2 ExpiryTime:2024-02-29 03:08:18 +0000 UTC Type:0 Mac:52:54:00:7f:e3:b1 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-254968 Clientid:01:52:54:00:7f:e3:b1}
	I0229 02:14:25.635944  360776 main.go:141] libmachine: (old-k8s-version-254968) DBG | domain old-k8s-version-254968 has defined IP address 192.168.50.250 and MAC address 52:54:00:7f:e3:b1 in network mk-old-k8s-version-254968
	I0229 02:14:25.636031  360776 provision.go:138] copyHostCerts
	I0229 02:14:25.636092  360776 exec_runner.go:144] found /home/jenkins/minikube-integration/18063-309085/.minikube/cert.pem, removing ...
	I0229 02:14:25.636113  360776 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18063-309085/.minikube/cert.pem
	I0229 02:14:25.636196  360776 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18063-309085/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18063-309085/.minikube/cert.pem (1123 bytes)
	I0229 02:14:25.636344  360776 exec_runner.go:144] found /home/jenkins/minikube-integration/18063-309085/.minikube/key.pem, removing ...
	I0229 02:14:25.636358  360776 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18063-309085/.minikube/key.pem
	I0229 02:14:25.636399  360776 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18063-309085/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18063-309085/.minikube/key.pem (1675 bytes)
	I0229 02:14:25.636482  360776 exec_runner.go:144] found /home/jenkins/minikube-integration/18063-309085/.minikube/ca.pem, removing ...
	I0229 02:14:25.636492  360776 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18063-309085/.minikube/ca.pem
	I0229 02:14:25.636518  360776 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18063-309085/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18063-309085/.minikube/ca.pem (1082 bytes)
	I0229 02:14:25.636563  360776 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18063-309085/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18063-309085/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18063-309085/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-254968 san=[192.168.50.250 192.168.50.250 localhost 127.0.0.1 minikube old-k8s-version-254968]
	I0229 02:14:25.730709  360776 provision.go:172] copyRemoteCerts
	I0229 02:14:25.730783  360776 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0229 02:14:25.730812  360776 main.go:141] libmachine: (old-k8s-version-254968) Calling .GetSSHHostname
	I0229 02:14:25.733274  360776 main.go:141] libmachine: (old-k8s-version-254968) DBG | domain old-k8s-version-254968 has defined MAC address 52:54:00:7f:e3:b1 in network mk-old-k8s-version-254968
	I0229 02:14:25.733652  360776 main.go:141] libmachine: (old-k8s-version-254968) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:e3:b1", ip: ""} in network mk-old-k8s-version-254968: {Iface:virbr2 ExpiryTime:2024-02-29 03:08:18 +0000 UTC Type:0 Mac:52:54:00:7f:e3:b1 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-254968 Clientid:01:52:54:00:7f:e3:b1}
	I0229 02:14:25.733683  360776 main.go:141] libmachine: (old-k8s-version-254968) DBG | domain old-k8s-version-254968 has defined IP address 192.168.50.250 and MAC address 52:54:00:7f:e3:b1 in network mk-old-k8s-version-254968
	I0229 02:14:25.733869  360776 main.go:141] libmachine: (old-k8s-version-254968) Calling .GetSSHPort
	I0229 02:14:25.734106  360776 main.go:141] libmachine: (old-k8s-version-254968) Calling .GetSSHKeyPath
	I0229 02:14:25.734293  360776 main.go:141] libmachine: (old-k8s-version-254968) Calling .GetSSHUsername
	I0229 02:14:25.734469  360776 sshutil.go:53] new ssh client: &{IP:192.168.50.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-309085/.minikube/machines/old-k8s-version-254968/id_rsa Username:docker}
	I0229 02:14:25.830614  360776 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-309085/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0229 02:14:25.859360  360776 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-309085/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0229 02:14:25.887640  360776 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-309085/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0229 02:14:25.917009  360776 provision.go:86] duration metric: configureAuth took 287.355723ms
	I0229 02:14:25.917039  360776 buildroot.go:189] setting minikube options for container-runtime
	I0229 02:14:25.917243  360776 config.go:182] Loaded profile config "old-k8s-version-254968": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.16.0
	I0229 02:14:25.917261  360776 machine.go:91] provisioned docker machine in 553.932747ms
	I0229 02:14:25.917272  360776 start.go:300] post-start starting for "old-k8s-version-254968" (driver="kvm2")
	I0229 02:14:25.917289  360776 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0229 02:14:25.917323  360776 main.go:141] libmachine: (old-k8s-version-254968) Calling .DriverName
	I0229 02:14:25.917671  360776 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0229 02:14:25.917701  360776 main.go:141] libmachine: (old-k8s-version-254968) Calling .GetSSHHostname
	I0229 02:14:25.920586  360776 main.go:141] libmachine: (old-k8s-version-254968) DBG | domain old-k8s-version-254968 has defined MAC address 52:54:00:7f:e3:b1 in network mk-old-k8s-version-254968
	I0229 02:14:25.920995  360776 main.go:141] libmachine: (old-k8s-version-254968) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:e3:b1", ip: ""} in network mk-old-k8s-version-254968: {Iface:virbr2 ExpiryTime:2024-02-29 03:08:18 +0000 UTC Type:0 Mac:52:54:00:7f:e3:b1 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-254968 Clientid:01:52:54:00:7f:e3:b1}
	I0229 02:14:25.921029  360776 main.go:141] libmachine: (old-k8s-version-254968) DBG | domain old-k8s-version-254968 has defined IP address 192.168.50.250 and MAC address 52:54:00:7f:e3:b1 in network mk-old-k8s-version-254968
	I0229 02:14:25.921173  360776 main.go:141] libmachine: (old-k8s-version-254968) Calling .GetSSHPort
	I0229 02:14:25.921348  360776 main.go:141] libmachine: (old-k8s-version-254968) Calling .GetSSHKeyPath
	I0229 02:14:25.921504  360776 main.go:141] libmachine: (old-k8s-version-254968) Calling .GetSSHUsername
	I0229 02:14:25.921618  360776 sshutil.go:53] new ssh client: &{IP:192.168.50.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-309085/.minikube/machines/old-k8s-version-254968/id_rsa Username:docker}
	I0229 02:14:26.011565  360776 ssh_runner.go:195] Run: cat /etc/os-release
	I0229 02:14:26.016924  360776 info.go:137] Remote host: Buildroot 2023.02.9
	I0229 02:14:26.016951  360776 filesync.go:126] Scanning /home/jenkins/minikube-integration/18063-309085/.minikube/addons for local assets ...
	I0229 02:14:26.017015  360776 filesync.go:126] Scanning /home/jenkins/minikube-integration/18063-309085/.minikube/files for local assets ...
	I0229 02:14:26.017117  360776 filesync.go:149] local asset: /home/jenkins/minikube-integration/18063-309085/.minikube/files/etc/ssl/certs/3163362.pem -> 3163362.pem in /etc/ssl/certs
	I0229 02:14:26.017241  360776 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0229 02:14:26.028356  360776 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-309085/.minikube/files/etc/ssl/certs/3163362.pem --> /etc/ssl/certs/3163362.pem (1708 bytes)
	I0229 02:14:26.057879  360776 start.go:303] post-start completed in 140.587167ms
	I0229 02:14:26.057909  360776 fix.go:56] fixHost completed within 18.604045581s
	I0229 02:14:26.057932  360776 main.go:141] libmachine: (old-k8s-version-254968) Calling .GetSSHHostname
	I0229 02:14:26.060850  360776 main.go:141] libmachine: (old-k8s-version-254968) DBG | domain old-k8s-version-254968 has defined MAC address 52:54:00:7f:e3:b1 in network mk-old-k8s-version-254968
	I0229 02:14:26.061208  360776 main.go:141] libmachine: (old-k8s-version-254968) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:e3:b1", ip: ""} in network mk-old-k8s-version-254968: {Iface:virbr2 ExpiryTime:2024-02-29 03:08:18 +0000 UTC Type:0 Mac:52:54:00:7f:e3:b1 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-254968 Clientid:01:52:54:00:7f:e3:b1}
	I0229 02:14:26.061262  360776 main.go:141] libmachine: (old-k8s-version-254968) DBG | domain old-k8s-version-254968 has defined IP address 192.168.50.250 and MAC address 52:54:00:7f:e3:b1 in network mk-old-k8s-version-254968
	I0229 02:14:26.061447  360776 main.go:141] libmachine: (old-k8s-version-254968) Calling .GetSSHPort
	I0229 02:14:26.061672  360776 main.go:141] libmachine: (old-k8s-version-254968) Calling .GetSSHKeyPath
	I0229 02:14:26.061877  360776 main.go:141] libmachine: (old-k8s-version-254968) Calling .GetSSHKeyPath
	I0229 02:14:26.062017  360776 main.go:141] libmachine: (old-k8s-version-254968) Calling .GetSSHUsername
	I0229 02:14:26.062215  360776 main.go:141] libmachine: Using SSH client type: native
	I0229 02:14:26.062468  360776 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.50.250 22 <nil> <nil>}
	I0229 02:14:26.062480  360776 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0229 02:14:26.179484  360776 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709172866.128791151
	
	I0229 02:14:26.179514  360776 fix.go:206] guest clock: 1709172866.128791151
	I0229 02:14:26.179522  360776 fix.go:219] Guest: 2024-02-29 02:14:26.128791151 +0000 UTC Remote: 2024-02-29 02:14:26.057913348 +0000 UTC m=+18.759569944 (delta=70.877803ms)
	I0229 02:14:26.179558  360776 fix.go:190] guest clock delta is within tolerance: 70.877803ms
	I0229 02:14:26.179563  360776 start.go:83] releasing machines lock for "old-k8s-version-254968", held for 18.725716424s
	I0229 02:14:26.179596  360776 main.go:141] libmachine: (old-k8s-version-254968) Calling .DriverName
	I0229 02:14:26.179897  360776 main.go:141] libmachine: (old-k8s-version-254968) Calling .GetIP
	I0229 02:14:26.182856  360776 main.go:141] libmachine: (old-k8s-version-254968) DBG | domain old-k8s-version-254968 has defined MAC address 52:54:00:7f:e3:b1 in network mk-old-k8s-version-254968
	I0229 02:14:26.183208  360776 main.go:141] libmachine: (old-k8s-version-254968) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:e3:b1", ip: ""} in network mk-old-k8s-version-254968: {Iface:virbr2 ExpiryTime:2024-02-29 03:08:18 +0000 UTC Type:0 Mac:52:54:00:7f:e3:b1 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-254968 Clientid:01:52:54:00:7f:e3:b1}
	I0229 02:14:26.183247  360776 main.go:141] libmachine: (old-k8s-version-254968) DBG | domain old-k8s-version-254968 has defined IP address 192.168.50.250 and MAC address 52:54:00:7f:e3:b1 in network mk-old-k8s-version-254968
	I0229 02:14:26.183422  360776 main.go:141] libmachine: (old-k8s-version-254968) Calling .DriverName
	I0229 02:14:26.183973  360776 main.go:141] libmachine: (old-k8s-version-254968) Calling .DriverName
	I0229 02:14:26.184166  360776 main.go:141] libmachine: (old-k8s-version-254968) Calling .DriverName
	I0229 02:14:26.184332  360776 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0229 02:14:26.184377  360776 main.go:141] libmachine: (old-k8s-version-254968) Calling .GetSSHHostname
	I0229 02:14:26.184412  360776 ssh_runner.go:195] Run: cat /version.json
	I0229 02:14:26.184439  360776 main.go:141] libmachine: (old-k8s-version-254968) Calling .GetSSHHostname
	I0229 02:14:26.187085  360776 main.go:141] libmachine: (old-k8s-version-254968) DBG | domain old-k8s-version-254968 has defined MAC address 52:54:00:7f:e3:b1 in network mk-old-k8s-version-254968
	I0229 02:14:26.187389  360776 main.go:141] libmachine: (old-k8s-version-254968) DBG | domain old-k8s-version-254968 has defined MAC address 52:54:00:7f:e3:b1 in network mk-old-k8s-version-254968
	I0229 02:14:26.187494  360776 main.go:141] libmachine: (old-k8s-version-254968) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:e3:b1", ip: ""} in network mk-old-k8s-version-254968: {Iface:virbr2 ExpiryTime:2024-02-29 03:08:18 +0000 UTC Type:0 Mac:52:54:00:7f:e3:b1 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-254968 Clientid:01:52:54:00:7f:e3:b1}
	I0229 02:14:26.187524  360776 main.go:141] libmachine: (old-k8s-version-254968) DBG | domain old-k8s-version-254968 has defined IP address 192.168.50.250 and MAC address 52:54:00:7f:e3:b1 in network mk-old-k8s-version-254968
	I0229 02:14:26.187687  360776 main.go:141] libmachine: (old-k8s-version-254968) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:e3:b1", ip: ""} in network mk-old-k8s-version-254968: {Iface:virbr2 ExpiryTime:2024-02-29 03:08:18 +0000 UTC Type:0 Mac:52:54:00:7f:e3:b1 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-254968 Clientid:01:52:54:00:7f:e3:b1}
	I0229 02:14:26.187688  360776 main.go:141] libmachine: (old-k8s-version-254968) Calling .GetSSHPort
	I0229 02:14:26.187712  360776 main.go:141] libmachine: (old-k8s-version-254968) DBG | domain old-k8s-version-254968 has defined IP address 192.168.50.250 and MAC address 52:54:00:7f:e3:b1 in network mk-old-k8s-version-254968
	I0229 02:14:26.187844  360776 main.go:141] libmachine: (old-k8s-version-254968) Calling .GetSSHPort
	I0229 02:14:26.187906  360776 main.go:141] libmachine: (old-k8s-version-254968) Calling .GetSSHKeyPath
	I0229 02:14:26.188038  360776 main.go:141] libmachine: (old-k8s-version-254968) Calling .GetSSHKeyPath
	I0229 02:14:26.188093  360776 main.go:141] libmachine: (old-k8s-version-254968) Calling .GetSSHUsername
	I0229 02:14:26.188177  360776 main.go:141] libmachine: (old-k8s-version-254968) Calling .GetSSHUsername
	I0229 02:14:26.188244  360776 sshutil.go:53] new ssh client: &{IP:192.168.50.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-309085/.minikube/machines/old-k8s-version-254968/id_rsa Username:docker}
	I0229 02:14:26.188283  360776 sshutil.go:53] new ssh client: &{IP:192.168.50.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-309085/.minikube/machines/old-k8s-version-254968/id_rsa Username:docker}
	I0229 02:14:26.296231  360776 ssh_runner.go:195] Run: systemctl --version
	I0229 02:14:26.303547  360776 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0229 02:14:26.309969  360776 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0229 02:14:26.310036  360776 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0229 02:14:26.329906  360776 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0229 02:14:26.329935  360776 start.go:475] detecting cgroup driver to use...
	I0229 02:14:26.330049  360776 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0229 02:14:26.358523  360776 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0229 02:14:26.373296  360776 docker.go:217] disabling cri-docker service (if available) ...
	I0229 02:14:26.373358  360776 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0229 02:14:26.388400  360776 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0229 02:14:26.402767  360776 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0229 02:14:26.517723  360776 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0229 02:14:26.691450  360776 docker.go:233] disabling docker service ...
	I0229 02:14:26.691550  360776 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0229 02:14:26.709346  360776 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0229 02:14:26.725789  360776 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0229 02:14:26.877777  360776 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0229 02:14:27.014139  360776 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0229 02:14:27.030559  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0229 02:14:27.051822  360776 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.1"|' /etc/containerd/config.toml"
	I0229 02:14:27.063518  360776 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0229 02:14:27.074936  360776 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0229 02:14:27.075020  360776 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0229 02:14:27.086802  360776 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0229 02:14:27.098502  360776 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0229 02:14:27.110086  360776 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0229 02:14:27.122136  360776 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0229 02:14:27.134207  360776 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
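	# Annotation (not harness output): taken together, the sed edits above align
	# the guest's /etc/containerd/config.toml with this run's expectations --
	# pause image registry.k8s.io/pause:3.1, restrict_oom_score_adj=false, the
	# runc v2 runtime, cgroupfs (SystemdCgroup=false, per containerd.go:146
	# above), and CNI conf_dir=/etc/cni/net.d. A quick guest-side check would be:
	#   grep -E 'sandbox_image|restrict_oom_score_adj|SystemdCgroup|conf_dir' /etc/containerd/config.toml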
	I0229 02:14:27.146226  360776 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0229 02:14:27.156353  360776 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0229 02:14:27.156423  360776 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0229 02:14:27.170906  360776 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0229 02:14:27.183233  360776 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 02:14:27.337432  360776 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0229 02:14:27.372168  360776 start.go:522] Will wait 60s for socket path /run/containerd/containerd.sock
	I0229 02:14:27.372271  360776 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0229 02:14:27.377354  360776 retry.go:31] will retry after 1.318253634s: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
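
The retry.go line above is one iteration of a bounded wait: stat the socket, and if it is not there yet, sleep and try again until the 60s budget from start.go:522 is spent. A simplified sketch of that loop:

    import (
        "fmt"
        "os"
        "time"
    )

    // waitForSocket polls until the containerd socket exists or the
    // 60-second budget is exhausted. The real retry uses randomized,
    // growing intervals (hence "will retry after 1.318253634s").
    func waitForSocket(path string) error {
        deadline := time.Now().Add(60 * time.Second)
        for {
            if _, err := os.Stat(path); err == nil {
                return nil
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("socket %s did not appear within 60s", path)
            }
            time.Sleep(time.Second)
        }
    }
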
	I0229 02:14:28.696334  360776 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0229 02:14:28.702421  360776 start.go:543] Will wait 60s for crictl version
	I0229 02:14:28.702505  360776 ssh_runner.go:195] Run: which crictl
	I0229 02:14:28.707198  360776 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0229 02:14:28.751737  360776 start.go:559] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.7.11
	RuntimeApiVersion:  v1
	I0229 02:14:28.751819  360776 ssh_runner.go:195] Run: containerd --version
	I0229 02:14:28.786332  360776 ssh_runner.go:195] Run: containerd --version
	I0229 02:14:28.818064  360776 out.go:177] * Preparing Kubernetes v1.16.0 on containerd 1.7.11 ...
	I0229 02:14:28.819410  360776 main.go:141] libmachine: (old-k8s-version-254968) Calling .GetIP
	I0229 02:14:28.821971  360776 main.go:141] libmachine: (old-k8s-version-254968) DBG | domain old-k8s-version-254968 has defined MAC address 52:54:00:7f:e3:b1 in network mk-old-k8s-version-254968
	I0229 02:14:28.822435  360776 main.go:141] libmachine: (old-k8s-version-254968) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:e3:b1", ip: ""} in network mk-old-k8s-version-254968: {Iface:virbr2 ExpiryTime:2024-02-29 03:08:18 +0000 UTC Type:0 Mac:52:54:00:7f:e3:b1 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-254968 Clientid:01:52:54:00:7f:e3:b1}
	I0229 02:14:28.822467  360776 main.go:141] libmachine: (old-k8s-version-254968) DBG | domain old-k8s-version-254968 has defined IP address 192.168.50.250 and MAC address 52:54:00:7f:e3:b1 in network mk-old-k8s-version-254968
	I0229 02:14:28.822667  360776 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0229 02:14:28.827444  360776 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
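
The bash one-liner above makes the /etc/hosts edit idempotent: strip any stale host.minikube.internal line, append the fresh mapping, and copy the result back. The same shape in Go (a hypothetical helper):

    import (
        "os"
        "strings"
    )

    // upsertHostsEntry drops any existing line ending in "\t"+hostname,
    // then appends "ip\thostname", mirroring the grep -v / echo / cp
    // pipeline above.
    func upsertHostsEntry(ip, hostname string) error {
        data, err := os.ReadFile("/etc/hosts")
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if strings.HasSuffix(line, "\t"+hostname) {
                continue
            }
            kept = append(kept, line)
        }
        kept = append(kept, ip+"\t"+hostname)
        return os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0o644)
    }
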
	I0229 02:14:28.842815  360776 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I0229 02:14:28.842908  360776 ssh_runner.go:195] Run: sudo crictl images --output json
	I0229 02:14:28.886225  360776 containerd.go:608] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0229 02:14:28.886296  360776 ssh_runner.go:195] Run: which lz4
	I0229 02:14:28.890996  360776 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0229 02:14:28.895688  360776 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0229 02:14:28.895716  360776 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-309085/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (440628646 bytes)
	I0229 02:14:30.787903  360776 containerd.go:548] Took 1.896928 seconds to copy over tarball
	I0229 02:14:30.788006  360776 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0229 02:14:33.940661  360776 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.152618386s)
	I0229 02:14:33.940702  360776 containerd.go:555] Took 3.152761 seconds to extract the tarball
	I0229 02:14:33.940715  360776 ssh_runner.go:146] rm: /preloaded.tar.lz4
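
The preceding lines are the preload fast path: a stat existence check, a ~440 MB copy only on a miss, a tar -I lz4 unpack into /var to seed containerd's image store, then cleanup. Its control flow, sketched with assumed run/scp ssh helpers (not minikube's real API):

    // preloadImages copies and unpacks the image tarball only when the
    // guest does not already have it. run and scp are assumed helpers.
    func preloadImages(local, remote string) error {
        if _, err := run(`stat -c "%s %y" ` + remote); err != nil {
            if err := scp(local, remote); err != nil { // ~440 MB copy
                return err
            }
        }
        if _, err := run("sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf " + remote); err != nil {
            return err
        }
        _, err := run("rm -f " + remote) // free the space once extracted
        return err
    }
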
	I0229 02:14:33.991384  360776 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 02:14:34.119110  360776 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0229 02:14:34.152662  360776 ssh_runner.go:195] Run: sudo crictl images --output json
	I0229 02:14:34.210643  360776 retry.go:31] will retry after 258.76967ms: sudo crictl images --output json: Process exited with status 1
	stdout:
	
	stderr:
	time="2024-02-29T02:14:34Z" level=fatal msg="validate service connection: validate CRI v1 image API for endpoint \"unix:///run/containerd/containerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
	I0229 02:14:34.470240  360776 ssh_runner.go:195] Run: sudo crictl images --output json
	I0229 02:14:34.514690  360776 containerd.go:608] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0229 02:14:34.514735  360776 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0229 02:14:34.514809  360776 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 02:14:34.514848  360776 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I0229 02:14:34.514920  360776 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I0229 02:14:34.514932  360776 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I0229 02:14:34.514853  360776 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0229 02:14:34.515118  360776 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0229 02:14:34.514878  360776 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I0229 02:14:34.515508  360776 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I0229 02:14:34.516959  360776 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I0229 02:14:34.516972  360776 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I0229 02:14:34.516981  360776 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I0229 02:14:34.517003  360776 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I0229 02:14:34.516962  360776 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 02:14:34.517103  360776 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0229 02:14:34.516978  360776 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I0229 02:14:34.517128  360776 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
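
The eight "retrieving image"/"daemon lookup" pairs above land within microseconds of each other because the lookups are issued concurrently; each local Docker-daemon probe misses ("No such image"), so retrieval falls back to other sources. A minimal fan-out sketch (lookupDaemon is a hypothetical stand-in for the per-image work):

    import "sync"

    // lookupAll probes every required image in parallel, matching the
    // interleaved timestamps in the log above.
    func lookupAll(images []string) {
        var wg sync.WaitGroup
        for _, img := range images {
            img := img // capture loop variable (pre-Go 1.22 idiom)
            wg.Add(1)
            go func() {
                defer wg.Done()
                lookupDaemon(img) // hypothetical per-image retrieval
            }()
        }
        wg.Wait()
    }
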
	I0229 02:14:34.654553  360776 containerd.go:252] Checking existence of image with name "registry.k8s.io/kube-proxy:v1.16.0" and sha "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384"
	I0229 02:14:34.654614  360776 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images check
	I0229 02:14:34.701356  360776 containerd.go:252] Checking existence of image with name "registry.k8s.io/pause:3.1" and sha "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e"
	I0229 02:14:34.701420  360776 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images check
	I0229 02:14:34.788170  360776 containerd.go:252] Checking existence of image with name "registry.k8s.io/etcd:3.3.15-0" and sha "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed"
	I0229 02:14:34.788248  360776 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images check
	I0229 02:14:34.788277  360776 containerd.go:252] Checking existence of image with name "registry.k8s.io/kube-controller-manager:v1.16.0" and sha "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d"
	I0229 02:14:34.788343  360776 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images check
	I0229 02:14:34.788452  360776 containerd.go:252] Checking existence of image with name "registry.k8s.io/kube-scheduler:v1.16.0" and sha "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a"
	I0229 02:14:34.788493  360776 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images check
	I0229 02:14:34.791843  360776 containerd.go:252] Checking existence of image with name "registry.k8s.io/kube-apiserver:v1.16.0" and sha "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e"
	I0229 02:14:34.791885  360776 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images check
	I0229 02:14:34.798393  360776 containerd.go:252] Checking existence of image with name "registry.k8s.io/coredns:1.6.2" and sha "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b"
	I0229 02:14:34.798431  360776 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images check
	I0229 02:14:34.841538  360776 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I0229 02:14:34.841593  360776 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I0229 02:14:34.841645  360776 ssh_runner.go:195] Run: which crictl
	I0229 02:14:35.056346  360776 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I0229 02:14:35.056400  360776 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I0229 02:14:35.056452  360776 ssh_runner.go:195] Run: which crictl
	I0229 02:14:35.352285  360776 containerd.go:252] Checking existence of image with name "gcr.io/k8s-minikube/storage-provisioner:v5" and sha "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"
	I0229 02:14:35.352371  360776 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images check
	I0229 02:14:35.774162  360776 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I0229 02:14:35.774298  360776 cri.go:218] Removing image: registry.k8s.io/etcd:3.3.15-0
	I0229 02:14:35.774388  360776 ssh_runner.go:195] Run: which crictl
	I0229 02:14:35.774746  360776 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I0229 02:14:35.774790  360776 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0229 02:14:35.774844  360776 ssh_runner.go:195] Run: which crictl
	I0229 02:14:35.775346  360776 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I0229 02:14:35.775382  360776 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I0229 02:14:35.775417  360776 ssh_runner.go:195] Run: which crictl
	I0229 02:14:35.776023  360776 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I0229 02:14:35.776056  360776 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I0229 02:14:35.776097  360776 ssh_runner.go:195] Run: which crictl
	I0229 02:14:35.776573  360776 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I0229 02:14:35.776603  360776 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0
	I0229 02:14:35.776607  360776 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.2
	I0229 02:14:35.776649  360776 ssh_runner.go:195] Run: which crictl
	I0229 02:14:35.776684  360776 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I0229 02:14:35.894510  360776 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0
	I0229 02:14:35.894593  360776 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I0229 02:14:35.894643  360776 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0
	I0229 02:14:35.894711  360776 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.16.0
	I0229 02:14:35.975518  360776 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18063-309085/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I0229 02:14:35.975584  360776 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.2
	I0229 02:14:35.975654  360776 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18063-309085/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I0229 02:14:36.030734  360776 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18063-309085/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I0229 02:14:36.030832  360776 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18063-309085/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I0229 02:14:36.037699  360776 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18063-309085/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I0229 02:14:36.042127  360776 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18063-309085/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I0229 02:14:36.069648  360776 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18063-309085/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I0229 02:14:36.069705  360776 cache_images.go:92] LoadImages completed in 1.554953798s
	W0229 02:14:36.069809  360776 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18063-309085/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1: no such file or directory
	X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18063-309085/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1: no such file or directory
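
The rmi/Loading sequence above follows a simple rule: an image counts as present only if the runtime holds it under the exact pinned sha; anything else is removed and reloaded from the local cache. Here every cache file is absent, so LoadImages ends with the (non-fatal) warning. The decision, sketched with assumed helpers:

    // ensureImage re-syncs one image. runtimeDigest, removeImage and
    // loadCached are hypothetical helpers standing in for the ctr/crictl
    // calls in the log.
    func ensureImage(name, wantSHA, cachePath string) error {
        if runtimeDigest(name) == wantSHA {
            return nil // already present at the right hash
        }
        if err := removeImage(name); err != nil { // sudo crictl rmi <name>
            return err
        }
        return loadCached(cachePath) // fails here: cache file is missing
    }
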
	I0229 02:14:36.069871  360776 ssh_runner.go:195] Run: sudo crictl info
	I0229 02:14:36.112163  360776 cni.go:84] Creating CNI manager for ""
	I0229 02:14:36.112187  360776 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0229 02:14:36.112210  360776 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0229 02:14:36.112235  360776 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.250 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-254968 NodeName:old-k8s-version-254968 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.250"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.250 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0229 02:14:36.112407  360776 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.250
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "old-k8s-version-254968"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.250
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.250"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-254968
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.50.250:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0229 02:14:36.112527  360776 kubeadm.go:976] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=old-k8s-version-254968 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.250
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-254968 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0229 02:14:36.112617  360776 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0229 02:14:36.123729  360776 binaries.go:44] Found k8s binaries, skipping transfer
	I0229 02:14:36.123795  360776 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0229 02:14:36.133952  360776 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (444 bytes)
	I0229 02:14:36.153984  360776 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0229 02:14:36.173765  360776 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2189 bytes)
	I0229 02:14:36.192890  360776 ssh_runner.go:195] Run: grep 192.168.50.250	control-plane.minikube.internal$ /etc/hosts
	I0229 02:14:36.197173  360776 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.250	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0229 02:14:36.211011  360776 certs.go:56] Setting up /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/old-k8s-version-254968 for IP: 192.168.50.250
	I0229 02:14:36.211049  360776 certs.go:190] acquiring lock for shared ca certs: {Name:mkd93205d1e0ff28501dacf7d21e224f19de9501 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:14:36.211229  360776 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18063-309085/.minikube/ca.key
	I0229 02:14:36.211292  360776 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18063-309085/.minikube/proxy-client-ca.key
	I0229 02:14:36.211444  360776 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/old-k8s-version-254968/client.key
	I0229 02:14:36.211514  360776 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/old-k8s-version-254968/apiserver.key.df55815c
	I0229 02:14:36.211560  360776 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/old-k8s-version-254968/proxy-client.key
	I0229 02:14:36.211746  360776 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-309085/.minikube/certs/home/jenkins/minikube-integration/18063-309085/.minikube/certs/316336.pem (1338 bytes)
	W0229 02:14:36.211798  360776 certs.go:433] ignoring /home/jenkins/minikube-integration/18063-309085/.minikube/certs/home/jenkins/minikube-integration/18063-309085/.minikube/certs/316336_empty.pem, impossibly tiny 0 bytes
	I0229 02:14:36.211814  360776 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-309085/.minikube/certs/home/jenkins/minikube-integration/18063-309085/.minikube/certs/ca-key.pem (1679 bytes)
	I0229 02:14:36.211853  360776 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-309085/.minikube/certs/home/jenkins/minikube-integration/18063-309085/.minikube/certs/ca.pem (1082 bytes)
	I0229 02:14:36.211885  360776 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-309085/.minikube/certs/home/jenkins/minikube-integration/18063-309085/.minikube/certs/cert.pem (1123 bytes)
	I0229 02:14:36.211915  360776 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-309085/.minikube/certs/home/jenkins/minikube-integration/18063-309085/.minikube/certs/key.pem (1675 bytes)
	I0229 02:14:36.211976  360776 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-309085/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18063-309085/.minikube/files/etc/ssl/certs/3163362.pem (1708 bytes)
	I0229 02:14:36.212631  360776 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/old-k8s-version-254968/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0229 02:14:36.243639  360776 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/old-k8s-version-254968/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0229 02:14:36.271363  360776 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/old-k8s-version-254968/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0229 02:14:36.300473  360776 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/old-k8s-version-254968/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0229 02:14:36.327231  360776 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-309085/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0229 02:14:36.354031  360776 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-309085/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0229 02:14:36.383664  360776 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-309085/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0229 02:14:36.411225  360776 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-309085/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0229 02:14:36.440023  360776 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-309085/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0229 02:14:36.469743  360776 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-309085/.minikube/certs/316336.pem --> /usr/share/ca-certificates/316336.pem (1338 bytes)
	I0229 02:14:36.498394  360776 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-309085/.minikube/files/etc/ssl/certs/3163362.pem --> /usr/share/ca-certificates/3163362.pem (1708 bytes)
	I0229 02:14:36.526238  360776 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0229 02:14:36.547455  360776 ssh_runner.go:195] Run: openssl version
	I0229 02:14:36.553904  360776 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0229 02:14:36.565842  360776 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0229 02:14:36.571204  360776 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 29 01:12 /usr/share/ca-certificates/minikubeCA.pem
	I0229 02:14:36.571280  360776 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0229 02:14:36.577702  360776 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0229 02:14:36.589451  360776 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/316336.pem && ln -fs /usr/share/ca-certificates/316336.pem /etc/ssl/certs/316336.pem"
	I0229 02:14:36.600810  360776 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/316336.pem
	I0229 02:14:36.605979  360776 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 29 01:18 /usr/share/ca-certificates/316336.pem
	I0229 02:14:36.606026  360776 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/316336.pem
	I0229 02:14:36.612294  360776 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/316336.pem /etc/ssl/certs/51391683.0"
	I0229 02:14:36.624290  360776 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3163362.pem && ln -fs /usr/share/ca-certificates/3163362.pem /etc/ssl/certs/3163362.pem"
	I0229 02:14:36.636593  360776 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3163362.pem
	I0229 02:14:36.641902  360776 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 29 01:18 /usr/share/ca-certificates/3163362.pem
	I0229 02:14:36.641970  360776 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3163362.pem
	I0229 02:14:36.648832  360776 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3163362.pem /etc/ssl/certs/3ec20f2e.0"
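
The openssl/ln pairs above implement OpenSSL's hashed-CA-directory convention: each CA in /etc/ssl/certs must be reachable via a symlink named <subject-hash>.0, and `openssl x509 -hash -noout` computes that hash (b5213941 is minikubeCA's, per the log). One installation, sketched:

    import (
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // installCA links a CA into /etc/ssl/certs under its OpenSSL subject
    // hash so TLS clients on the node can find and trust it.
    func installCA(pemPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            return err
        }
        hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
        link := filepath.Join("/etc/ssl/certs", hash+".0")
        _ = os.Remove(link) // replace any stale link
        return os.Symlink(pemPath, link)
    }
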
	I0229 02:14:36.661562  360776 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0229 02:14:36.667224  360776 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0229 02:14:36.673838  360776 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0229 02:14:36.680133  360776 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0229 02:14:36.686336  360776 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0229 02:14:36.692726  360776 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0229 02:14:36.698801  360776 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
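
Each `-checkend 86400` call above asks whether a certificate expires within the next 24 hours (openssl exits non-zero if so), which is how the restart path decides whether certs need regenerating. A native-Go equivalent for one cert:

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresSoon reports whether the certificate at path lapses within
    // the given window, matching `openssl x509 -checkend` semantics.
    func expiresSoon(path string, window time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM block in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Until(cert.NotAfter) < window, nil
    }
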
	I0229 02:14:36.704953  360776 kubeadm.go:404] StartCluster: {Name:old-k8s-version-254968 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-254968 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.250 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 02:14:36.705067  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0229 02:14:36.705118  360776 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0229 02:14:36.744855  360776 cri.go:89] found id: ""
	I0229 02:14:36.744923  360776 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0229 02:14:36.755951  360776 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0229 02:14:36.755970  360776 kubeadm.go:636] restartCluster start
	I0229 02:14:36.756025  360776 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0229 02:14:36.766274  360776 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:14:36.767265  360776 kubeconfig.go:135] verify returned: extract IP: "old-k8s-version-254968" does not appear in /home/jenkins/minikube-integration/18063-309085/kubeconfig
	I0229 02:14:36.767904  360776 kubeconfig.go:146] "old-k8s-version-254968" context is missing from /home/jenkins/minikube-integration/18063-309085/kubeconfig - will repair!
	I0229 02:14:36.768820  360776 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18063-309085/kubeconfig: {Name:mkd85f0f36cbc770f723a754929a01738907b7bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:14:36.770587  360776 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0229 02:14:36.781114  360776 api_server.go:166] Checking apiserver status ...
	I0229 02:14:36.781164  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:14:36.794043  360776 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:14:37.281618  360776 api_server.go:166] Checking apiserver status ...
	I0229 02:14:37.281687  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:14:37.295291  360776 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:14:37.782133  360776 api_server.go:166] Checking apiserver status ...
	I0229 02:14:37.782240  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:14:37.796734  360776 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:14:38.281214  360776 api_server.go:166] Checking apiserver status ...
	I0229 02:14:38.281286  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:14:38.297874  360776 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:14:38.781404  360776 api_server.go:166] Checking apiserver status ...
	I0229 02:14:38.781483  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:14:38.795949  360776 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:14:39.282031  360776 api_server.go:166] Checking apiserver status ...
	I0229 02:14:39.282114  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:14:39.296793  360776 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:14:39.781388  360776 api_server.go:166] Checking apiserver status ...
	I0229 02:14:39.781452  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:14:39.796132  360776 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:14:40.281249  360776 api_server.go:166] Checking apiserver status ...
	I0229 02:14:40.281323  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:14:40.295489  360776 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:14:40.782146  360776 api_server.go:166] Checking apiserver status ...
	I0229 02:14:40.782239  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:14:40.796390  360776 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:14:41.281495  360776 api_server.go:166] Checking apiserver status ...
	I0229 02:14:41.281562  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:14:41.296322  360776 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:14:41.781880  360776 api_server.go:166] Checking apiserver status ...
	I0229 02:14:41.781956  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:14:41.796928  360776 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:14:42.281233  360776 api_server.go:166] Checking apiserver status ...
	I0229 02:14:42.281333  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:14:42.297112  360776 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:14:42.781329  360776 api_server.go:166] Checking apiserver status ...
	I0229 02:14:42.781429  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:14:42.794870  360776 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:14:43.281406  360776 api_server.go:166] Checking apiserver status ...
	I0229 02:14:43.281536  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:14:43.298909  360776 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:14:43.782146  360776 api_server.go:166] Checking apiserver status ...
	I0229 02:14:43.782226  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:14:43.795527  360776 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:14:44.282138  360776 api_server.go:166] Checking apiserver status ...
	I0229 02:14:44.282247  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:14:44.296736  360776 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:14:44.781490  360776 api_server.go:166] Checking apiserver status ...
	I0229 02:14:44.781566  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:14:44.795533  360776 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:14:45.282136  360776 api_server.go:166] Checking apiserver status ...
	I0229 02:14:45.282228  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:14:45.298642  360776 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:14:45.781203  360776 api_server.go:166] Checking apiserver status ...
	I0229 02:14:45.781284  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:14:45.795922  360776 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:14:46.281441  360776 api_server.go:166] Checking apiserver status ...
	I0229 02:14:46.281536  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:14:46.297195  360776 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:14:46.781521  360776 api_server.go:166] Checking apiserver status ...
	I0229 02:14:46.781604  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:14:46.795870  360776 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
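
The "Checking apiserver status" stanzas from 02:14:36 to 02:14:46 are a single poll loop: pgrep for the apiserver roughly every 500 ms under a deadline, which then surfaces as the "context deadline exceeded" on the next line. Its shape, sketched (apiserverPID is a hypothetical pgrep wrapper; the 10 s timeout is an assumption matching the observed window):

    import (
        "context"
        "time"
    )

    // waitForAPIServer polls until pgrep finds a kube-apiserver process
    // or the context deadline fires.
    func waitForAPIServer(parent context.Context) (int, error) {
        ctx, cancel := context.WithTimeout(parent, 10*time.Second)
        defer cancel()
        for {
            if pid, err := apiserverPID(); err == nil {
                return pid, nil
            }
            select {
            case <-ctx.Done():
                return 0, ctx.Err() // "context deadline exceeded"
            case <-time.After(500 * time.Millisecond):
            }
        }
    }
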
	I0229 02:14:46.795904  360776 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0229 02:14:46.795913  360776 kubeadm.go:1135] stopping kube-system containers ...
	I0229 02:14:46.795923  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0229 02:14:46.795967  360776 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0229 02:14:46.842762  360776 cri.go:89] found id: ""
	I0229 02:14:46.842844  360776 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0229 02:14:46.862286  360776 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 02:14:46.872496  360776 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 02:14:46.872579  360776 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0229 02:14:46.883143  360776 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0229 02:14:46.883170  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 02:14:47.017549  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 02:14:48.191948  360776 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.174337415s)
	I0229 02:14:48.192007  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0229 02:14:48.433274  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 02:14:48.561260  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
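
Instead of a full `kubeadm init`, the restart path replays individual init phases against the generated config, which is exactly the certs → kubeconfig → kubelet-start → control-plane → etcd sequence above. Sketched with an assumed run-over-ssh helper:

    import "fmt"

    // replayInitPhases runs each kubeadm init phase with the freshly
    // written kubeadm.yaml. run is an assumed ssh helper; the PATH
    // prefix selects the cached v1.16.0 binaries.
    func replayInitPhases() error {
        phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
        for _, p := range phases {
            cmd := fmt.Sprintf(`sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml`, p)
            if _, err := run(cmd); err != nil {
                return err
            }
        }
        return nil
    }
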
	I0229 02:14:48.672406  360776 api_server.go:52] waiting for apiserver process to appear ...
	I0229 02:14:48.672490  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:14:49.173011  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:14:49.672942  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:14:50.172840  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:14:50.673222  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:14:51.173238  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:14:51.672915  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:14:52.173570  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:14:52.673315  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:14:53.173610  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:14:53.672668  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:14:54.173108  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:14:54.673171  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:14:55.173251  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:14:55.672935  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:14:56.173060  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:14:56.673107  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:14:57.173192  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:14:57.672798  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:14:58.172654  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:14:58.673282  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:14:59.173312  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:14:59.672878  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:00.172953  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:00.673170  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:01.173005  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:01.672595  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:02.172649  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:02.673169  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:03.173251  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:03.672864  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:04.173580  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:04.672736  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:05.173278  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:05.672747  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:06.173514  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:06.672853  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:07.173295  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:07.673496  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:08.173235  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:08.672970  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:09.173203  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:09.672669  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:10.172971  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:10.673523  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:11.172857  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:11.672596  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:12.173541  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:12.673205  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:13.173523  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:13.672774  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:14.173115  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:14.673616  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:15.172831  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:15.673160  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:16.172966  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:16.673287  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:17.172640  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:17.672587  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:18.173318  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:18.673512  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:19.172966  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:19.673611  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:20.172605  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:20.672736  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:21.173587  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:21.673298  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:22.172625  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:22.672998  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:23.173387  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:23.673270  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:24.173552  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:24.673074  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:25.173423  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:25.673502  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:26.173531  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:26.672644  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:27.173372  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:27.672738  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:28.173326  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:28.673063  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:29.173178  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:29.673323  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:30.173306  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:30.673429  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:31.172889  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:31.672643  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:32.173215  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:32.672712  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:33.172874  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:33.672874  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:34.173296  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:34.673021  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:35.172643  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:35.672743  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:36.172648  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:36.673171  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:37.172582  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:37.672994  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:38.172969  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:38.673225  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:39.173291  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:39.673458  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:40.172766  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:40.672830  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:41.173174  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:41.672618  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:42.172606  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:42.673016  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:43.173406  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:43.672843  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:44.173068  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:44.673562  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:45.172977  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:45.673254  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:46.172757  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:46.672796  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:47.173606  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:47.673527  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:48.173283  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:48.673578  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:15:48.673686  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:15:48.735531  360776 cri.go:89] found id: ""
	I0229 02:15:48.735560  360776 logs.go:276] 0 containers: []
	W0229 02:15:48.735572  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:15:48.735580  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:15:48.735665  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:15:48.777775  360776 cri.go:89] found id: ""
	I0229 02:15:48.777801  360776 logs.go:276] 0 containers: []
	W0229 02:15:48.777812  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:15:48.777819  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:15:48.777893  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:15:48.816348  360776 cri.go:89] found id: ""
	I0229 02:15:48.816382  360776 logs.go:276] 0 containers: []
	W0229 02:15:48.816391  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:15:48.816398  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:15:48.816466  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:15:48.856576  360776 cri.go:89] found id: ""
	I0229 02:15:48.856627  360776 logs.go:276] 0 containers: []
	W0229 02:15:48.856640  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:15:48.856648  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:15:48.856712  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:15:48.896298  360776 cri.go:89] found id: ""
	I0229 02:15:48.896325  360776 logs.go:276] 0 containers: []
	W0229 02:15:48.896333  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:15:48.896339  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:15:48.896419  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:15:48.939474  360776 cri.go:89] found id: ""
	I0229 02:15:48.939523  360776 logs.go:276] 0 containers: []
	W0229 02:15:48.939537  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:15:48.939545  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:15:48.939609  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:15:48.979602  360776 cri.go:89] found id: ""
	I0229 02:15:48.979630  360776 logs.go:276] 0 containers: []
	W0229 02:15:48.979642  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:15:48.979649  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:15:48.979734  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:15:49.020455  360776 cri.go:89] found id: ""
	I0229 02:15:49.020485  360776 logs.go:276] 0 containers: []
	W0229 02:15:49.020495  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:15:49.020505  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:15:49.020517  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:15:49.070608  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:15:49.070653  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:15:49.086878  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:15:49.086913  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:15:49.222506  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:15:49.222532  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:15:49.222565  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:15:49.261476  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:15:49.261507  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
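
Each diagnostic round above walks a fixed list of expected control-plane components and asks crictl for container IDs by name; an empty result (found id: "") means not even an exited container was ever created, which is why every lookup logs a "No container was found matching" warning. A hedged sketch of that enumeration in Go; the component list and crictl flags are copied from the log, while the helper names and program structure are invented for illustration:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// components mirrors the names probed in each round of the log.
	var components = []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	}

	// listContainerIDs runs the same crictl query as the log: -a includes
	// exited containers, --quiet prints bare IDs, --name filters by name.
	func listContainerIDs(name string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}

	func main() {
		for _, c := range components {
			ids, err := listContainerIDs(c)
			if err != nil || len(ids) == 0 {
				fmt.Printf("no container found matching %q\n", c)
				continue
			}
			fmt.Printf("%s: %v\n", c, ids)
		}
	}
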
	I0229 02:15:51.812576  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:51.828566  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:15:51.828628  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:15:51.867885  360776 cri.go:89] found id: ""
	I0229 02:15:51.867913  360776 logs.go:276] 0 containers: []
	W0229 02:15:51.867922  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:15:51.867928  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:15:51.867999  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:15:51.910828  360776 cri.go:89] found id: ""
	I0229 02:15:51.910862  360776 logs.go:276] 0 containers: []
	W0229 02:15:51.910872  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:15:51.910879  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:15:51.910928  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:15:51.951547  360776 cri.go:89] found id: ""
	I0229 02:15:51.951578  360776 logs.go:276] 0 containers: []
	W0229 02:15:51.951590  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:15:51.951598  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:15:51.951683  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:15:51.992485  360776 cri.go:89] found id: ""
	I0229 02:15:51.992511  360776 logs.go:276] 0 containers: []
	W0229 02:15:51.992519  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:15:51.992525  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:15:51.992579  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:15:52.036445  360776 cri.go:89] found id: ""
	I0229 02:15:52.036481  360776 logs.go:276] 0 containers: []
	W0229 02:15:52.036494  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:15:52.036502  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:15:52.036567  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:15:52.075247  360776 cri.go:89] found id: ""
	I0229 02:15:52.075279  360776 logs.go:276] 0 containers: []
	W0229 02:15:52.075289  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:15:52.075298  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:15:52.075379  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:15:52.117468  360776 cri.go:89] found id: ""
	I0229 02:15:52.117498  360776 logs.go:276] 0 containers: []
	W0229 02:15:52.117507  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:15:52.117513  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:15:52.117575  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:15:52.156923  360776 cri.go:89] found id: ""
	I0229 02:15:52.156953  360776 logs.go:276] 0 containers: []
	W0229 02:15:52.156962  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:15:52.156972  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:15:52.156984  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:15:52.209140  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:15:52.209181  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:15:52.224877  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:15:52.224952  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:15:52.313049  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:15:52.313079  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:15:52.313096  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:15:52.361468  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:15:52.361520  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:15:54.934192  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:54.950604  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:15:54.950673  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:15:54.997665  360776 cri.go:89] found id: ""
	I0229 02:15:54.997700  360776 logs.go:276] 0 containers: []
	W0229 02:15:54.997713  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:15:54.997738  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:15:54.997824  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:15:55.043835  360776 cri.go:89] found id: ""
	I0229 02:15:55.043865  360776 logs.go:276] 0 containers: []
	W0229 02:15:55.043878  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:15:55.043885  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:15:55.043952  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:15:55.084745  360776 cri.go:89] found id: ""
	I0229 02:15:55.084773  360776 logs.go:276] 0 containers: []
	W0229 02:15:55.084784  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:15:55.084793  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:15:55.084857  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:15:55.126607  360776 cri.go:89] found id: ""
	I0229 02:15:55.126638  360776 logs.go:276] 0 containers: []
	W0229 02:15:55.126652  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:15:55.126660  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:15:55.126723  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:15:55.168954  360776 cri.go:89] found id: ""
	I0229 02:15:55.168984  360776 logs.go:276] 0 containers: []
	W0229 02:15:55.168997  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:15:55.169004  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:15:55.169068  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:15:55.209769  360776 cri.go:89] found id: ""
	I0229 02:15:55.209802  360776 logs.go:276] 0 containers: []
	W0229 02:15:55.209813  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:15:55.209819  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:15:55.209874  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:15:55.252174  360776 cri.go:89] found id: ""
	I0229 02:15:55.252206  360776 logs.go:276] 0 containers: []
	W0229 02:15:55.252218  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:15:55.252226  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:15:55.252280  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:15:55.301449  360776 cri.go:89] found id: ""
	I0229 02:15:55.301483  360776 logs.go:276] 0 containers: []
	W0229 02:15:55.301496  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:15:55.301508  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:15:55.301524  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:15:55.406764  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:15:55.406785  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:15:55.406810  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:15:55.450166  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:15:55.450213  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:15:55.499652  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:15:55.499703  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:15:55.548616  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:15:55.548665  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
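
After the container lookups come up empty, each round gathers the same bundle of host-side diagnostics; the commands appear verbatim in the log lines above (the order varies between rounds, the set does not). A small sketch that reproduces the bundle; the label-to-command map is transcribed from the log, while the surrounding program structure is an assumption for illustration:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// sources transcribes the diagnostic commands visible in the log.
	var sources = map[string]string{
		"kubelet":          "sudo journalctl -u kubelet -n 400",
		"dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
		"describe nodes":   "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig",
		"containerd":       "sudo journalctl -u containerd -n 400",
		"container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
	}

	func main() {
		// Map iteration order is randomized in Go, which is consistent with
		// the varying gather order seen between rounds in the log.
		for label, cmd := range sources {
			out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
			if err != nil {
				// e.g. "describe nodes" fails while the apiserver is down
				fmt.Printf("%s: %v\n", label, err)
				continue
			}
			fmt.Printf("== %s ==\n%s\n", label, out)
		}
	}
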
	I0229 02:15:58.064634  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:58.080287  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:15:58.080365  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:15:58.119448  360776 cri.go:89] found id: ""
	I0229 02:15:58.119480  360776 logs.go:276] 0 containers: []
	W0229 02:15:58.119492  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:15:58.119500  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:15:58.119563  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:15:58.159896  360776 cri.go:89] found id: ""
	I0229 02:15:58.159926  360776 logs.go:276] 0 containers: []
	W0229 02:15:58.159937  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:15:58.159945  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:15:58.160009  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:15:58.197746  360776 cri.go:89] found id: ""
	I0229 02:15:58.197774  360776 logs.go:276] 0 containers: []
	W0229 02:15:58.197785  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:15:58.197794  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:15:58.197873  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:15:58.242003  360776 cri.go:89] found id: ""
	I0229 02:15:58.242031  360776 logs.go:276] 0 containers: []
	W0229 02:15:58.242043  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:15:58.242051  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:15:58.242143  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:15:58.282762  360776 cri.go:89] found id: ""
	I0229 02:15:58.282795  360776 logs.go:276] 0 containers: []
	W0229 02:15:58.282815  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:15:58.282823  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:15:58.282889  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:15:58.324333  360776 cri.go:89] found id: ""
	I0229 02:15:58.324364  360776 logs.go:276] 0 containers: []
	W0229 02:15:58.324374  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:15:58.324380  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:15:58.324436  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:15:58.392279  360776 cri.go:89] found id: ""
	I0229 02:15:58.392308  360776 logs.go:276] 0 containers: []
	W0229 02:15:58.392321  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:15:58.392329  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:15:58.392390  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:15:58.448147  360776 cri.go:89] found id: ""
	I0229 02:15:58.448181  360776 logs.go:276] 0 containers: []
	W0229 02:15:58.448194  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:15:58.448211  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:15:58.448259  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:15:58.501620  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:15:58.501657  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:15:58.519453  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:15:58.519486  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:15:58.595868  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:15:58.595897  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:15:58.595917  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:15:58.630969  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:15:58.631004  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:16:01.181602  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:16:01.196379  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:16:01.196456  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:16:01.237984  360776 cri.go:89] found id: ""
	I0229 02:16:01.238008  360776 logs.go:276] 0 containers: []
	W0229 02:16:01.238019  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:16:01.238028  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:16:01.238109  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:16:01.284709  360776 cri.go:89] found id: ""
	I0229 02:16:01.284737  360776 logs.go:276] 0 containers: []
	W0229 02:16:01.284748  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:16:01.284756  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:16:01.284829  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:16:01.328675  360776 cri.go:89] found id: ""
	I0229 02:16:01.328711  360776 logs.go:276] 0 containers: []
	W0229 02:16:01.328724  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:16:01.328732  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:16:01.328787  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:16:01.384088  360776 cri.go:89] found id: ""
	I0229 02:16:01.384118  360776 logs.go:276] 0 containers: []
	W0229 02:16:01.384127  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:16:01.384133  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:16:01.384182  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:16:01.444582  360776 cri.go:89] found id: ""
	I0229 02:16:01.444617  360776 logs.go:276] 0 containers: []
	W0229 02:16:01.444630  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:16:01.444638  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:16:01.444709  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:16:01.483202  360776 cri.go:89] found id: ""
	I0229 02:16:01.483237  360776 logs.go:276] 0 containers: []
	W0229 02:16:01.483250  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:16:01.483258  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:16:01.483327  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:16:01.520422  360776 cri.go:89] found id: ""
	I0229 02:16:01.520455  360776 logs.go:276] 0 containers: []
	W0229 02:16:01.520467  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:16:01.520475  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:16:01.520545  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:16:01.558295  360776 cri.go:89] found id: ""
	I0229 02:16:01.558327  360776 logs.go:276] 0 containers: []
	W0229 02:16:01.558336  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:16:01.558348  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:16:01.558363  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:16:01.594473  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:16:01.594508  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:16:01.640865  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:16:01.640906  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:16:01.691693  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:16:01.691746  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:16:01.708474  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:16:01.708507  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:16:01.788334  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
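
The recurring "describe nodes" failure is a direct consequence of the empty container listings: kubectl is pointed at the apiserver on localhost:8443, and with no kube-apiserver container running, nothing is listening there, so the TCP connection is refused. A tiny probe that reproduces that symptom; the endpoint is taken from the kubectl error above, and the 2s timeout is an arbitrary choice:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	// Dial the apiserver endpoint kubectl is using. With no kube-apiserver
	// process, the dial fails with "connection refused", matching the
	// kubectl error repeated throughout the log.
	func main() {
		conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
		if err != nil {
			fmt.Println("apiserver unreachable:", err)
			return
		}
		conn.Close()
		fmt.Println("something is listening on localhost:8443")
	}
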
	I0229 02:16:04.288565  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:16:04.304344  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:16:04.304435  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:16:04.364586  360776 cri.go:89] found id: ""
	I0229 02:16:04.364623  360776 logs.go:276] 0 containers: []
	W0229 02:16:04.364635  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:16:04.364643  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:16:04.364712  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:16:04.423593  360776 cri.go:89] found id: ""
	I0229 02:16:04.423624  360776 logs.go:276] 0 containers: []
	W0229 02:16:04.423637  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:16:04.423646  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:16:04.423715  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:16:04.463437  360776 cri.go:89] found id: ""
	I0229 02:16:04.463471  360776 logs.go:276] 0 containers: []
	W0229 02:16:04.463482  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:16:04.463491  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:16:04.463553  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:16:04.500526  360776 cri.go:89] found id: ""
	I0229 02:16:04.500550  360776 logs.go:276] 0 containers: []
	W0229 02:16:04.500559  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:16:04.500565  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:16:04.500646  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:16:04.541324  360776 cri.go:89] found id: ""
	I0229 02:16:04.541363  360776 logs.go:276] 0 containers: []
	W0229 02:16:04.541376  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:16:04.541389  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:16:04.541466  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:16:04.586036  360776 cri.go:89] found id: ""
	I0229 02:16:04.586063  360776 logs.go:276] 0 containers: []
	W0229 02:16:04.586071  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:16:04.586093  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:16:04.586221  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:16:04.624838  360776 cri.go:89] found id: ""
	I0229 02:16:04.624864  360776 logs.go:276] 0 containers: []
	W0229 02:16:04.624873  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:16:04.624879  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:16:04.624942  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:16:04.665188  360776 cri.go:89] found id: ""
	I0229 02:16:04.665214  360776 logs.go:276] 0 containers: []
	W0229 02:16:04.665223  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:16:04.665235  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:16:04.665248  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:16:04.710572  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:16:04.710608  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:16:04.759440  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:16:04.759473  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:16:04.777220  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:16:04.777252  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:16:04.855773  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:16:04.855802  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:16:04.855820  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:16:07.391235  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:16:07.407347  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:16:07.407424  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:16:07.456950  360776 cri.go:89] found id: ""
	I0229 02:16:07.456978  360776 logs.go:276] 0 containers: []
	W0229 02:16:07.456988  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:16:07.456994  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:16:07.457056  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:16:07.501947  360776 cri.go:89] found id: ""
	I0229 02:16:07.501978  360776 logs.go:276] 0 containers: []
	W0229 02:16:07.501989  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:16:07.501996  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:16:07.502055  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:16:07.543248  360776 cri.go:89] found id: ""
	I0229 02:16:07.543283  360776 logs.go:276] 0 containers: []
	W0229 02:16:07.543296  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:16:07.543303  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:16:07.543369  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:16:07.580554  360776 cri.go:89] found id: ""
	I0229 02:16:07.580587  360776 logs.go:276] 0 containers: []
	W0229 02:16:07.580599  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:16:07.580606  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:16:07.580674  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:16:07.618930  360776 cri.go:89] found id: ""
	I0229 02:16:07.618955  360776 logs.go:276] 0 containers: []
	W0229 02:16:07.618966  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:16:07.618974  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:16:07.619038  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:16:07.656206  360776 cri.go:89] found id: ""
	I0229 02:16:07.656237  360776 logs.go:276] 0 containers: []
	W0229 02:16:07.656246  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:16:07.656252  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:16:07.656312  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:16:07.692225  360776 cri.go:89] found id: ""
	I0229 02:16:07.692255  360776 logs.go:276] 0 containers: []
	W0229 02:16:07.692266  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:16:07.692273  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:16:07.692334  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:16:07.728085  360776 cri.go:89] found id: ""
	I0229 02:16:07.728118  360776 logs.go:276] 0 containers: []
	W0229 02:16:07.728130  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:16:07.728143  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:16:07.728161  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:16:07.744078  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:16:07.744102  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:16:07.819861  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:16:07.819891  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:16:07.819906  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:16:07.854665  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:16:07.854694  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:16:07.899029  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:16:07.899059  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:16:10.449274  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:16:10.466228  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:16:10.466305  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:16:10.516655  360776 cri.go:89] found id: ""
	I0229 02:16:10.516686  360776 logs.go:276] 0 containers: []
	W0229 02:16:10.516699  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:16:10.516707  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:16:10.516776  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:16:10.551194  360776 cri.go:89] found id: ""
	I0229 02:16:10.551222  360776 logs.go:276] 0 containers: []
	W0229 02:16:10.551240  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:16:10.551247  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:16:10.551309  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:16:10.586984  360776 cri.go:89] found id: ""
	I0229 02:16:10.587012  360776 logs.go:276] 0 containers: []
	W0229 02:16:10.587021  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:16:10.587033  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:16:10.587101  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:16:10.631726  360776 cri.go:89] found id: ""
	I0229 02:16:10.631758  360776 logs.go:276] 0 containers: []
	W0229 02:16:10.631768  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:16:10.631775  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:16:10.631831  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:16:10.673054  360776 cri.go:89] found id: ""
	I0229 02:16:10.673090  360776 logs.go:276] 0 containers: []
	W0229 02:16:10.673102  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:16:10.673110  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:16:10.673175  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:16:10.716401  360776 cri.go:89] found id: ""
	I0229 02:16:10.716428  360776 logs.go:276] 0 containers: []
	W0229 02:16:10.716437  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:16:10.716448  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:16:10.716495  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:16:10.762425  360776 cri.go:89] found id: ""
	I0229 02:16:10.762451  360776 logs.go:276] 0 containers: []
	W0229 02:16:10.762460  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:16:10.762465  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:16:10.762523  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:16:10.800934  360776 cri.go:89] found id: ""
	I0229 02:16:10.800959  360776 logs.go:276] 0 containers: []
	W0229 02:16:10.800970  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:16:10.800981  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:16:10.800995  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:16:10.851152  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:16:10.851178  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:16:10.865410  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:16:10.865436  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:16:10.941654  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:16:10.941679  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:16:10.941699  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:16:10.977068  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:16:10.977099  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
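
The "container status" step above relies on a shell fallback idiom: `which crictl || echo crictl` substitutes the crictl path when it is on PATH (and the bare name otherwise, letting the invocation fail naturally), and if the whole crictl command fails, `|| sudo docker ps -a` falls back to Docker. A simplified Go rendering of the same preference order; note it collapses the echo-and-retry detail into a direct PATH check, so it is a sketch of the intent rather than a faithful translation:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// pickRuntimeCLI approximates the logged shell line: prefer crictl when
	// it is on PATH, otherwise fall back to docker, as in `|| sudo docker ps -a`.
	func pickRuntimeCLI() string {
		if path, err := exec.LookPath("crictl"); err == nil {
			return path
		}
		return "docker"
	}

	func main() {
		cli := pickRuntimeCLI()
		out, err := exec.Command("sudo", cli, "ps", "-a").CombinedOutput()
		if err != nil {
			fmt.Println(err)
		}
		fmt.Print(string(out))
	}
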
	I0229 02:16:13.524032  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:16:13.540646  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:16:13.540721  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:16:13.584696  360776 cri.go:89] found id: ""
	I0229 02:16:13.584727  360776 logs.go:276] 0 containers: []
	W0229 02:16:13.584740  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:16:13.584748  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:16:13.584819  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:16:13.620800  360776 cri.go:89] found id: ""
	I0229 02:16:13.620843  360776 logs.go:276] 0 containers: []
	W0229 02:16:13.620852  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:16:13.620858  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:16:13.620936  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:16:13.659179  360776 cri.go:89] found id: ""
	I0229 02:16:13.659209  360776 logs.go:276] 0 containers: []
	W0229 02:16:13.659218  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:16:13.659224  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:16:13.659286  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:16:13.695772  360776 cri.go:89] found id: ""
	I0229 02:16:13.695821  360776 logs.go:276] 0 containers: []
	W0229 02:16:13.695832  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:16:13.695840  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:16:13.695902  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:16:13.736870  360776 cri.go:89] found id: ""
	I0229 02:16:13.736895  360776 logs.go:276] 0 containers: []
	W0229 02:16:13.736906  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:16:13.736913  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:16:13.736978  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:16:13.782101  360776 cri.go:89] found id: ""
	I0229 02:16:13.782131  360776 logs.go:276] 0 containers: []
	W0229 02:16:13.782143  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:16:13.782151  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:16:13.782212  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:16:13.822638  360776 cri.go:89] found id: ""
	I0229 02:16:13.822663  360776 logs.go:276] 0 containers: []
	W0229 02:16:13.822672  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:16:13.822677  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:16:13.822741  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:16:13.861761  360776 cri.go:89] found id: ""
	I0229 02:16:13.861787  360776 logs.go:276] 0 containers: []
	W0229 02:16:13.861798  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:16:13.861811  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:16:13.861835  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:16:13.877464  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:16:13.877494  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:16:13.955485  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:16:13.955512  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:16:13.955525  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:16:13.990560  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:16:13.990594  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:16:14.037740  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:16:14.037780  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:16:16.588097  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:16:16.603732  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:16:16.603810  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:16:16.644337  360776 cri.go:89] found id: ""
	I0229 02:16:16.644372  360776 logs.go:276] 0 containers: []
	W0229 02:16:16.644393  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:16:16.644404  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:16:16.644474  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:16:16.687530  360776 cri.go:89] found id: ""
	I0229 02:16:16.687562  360776 logs.go:276] 0 containers: []
	W0229 02:16:16.687575  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:16:16.687584  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:16:16.687653  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:16:16.728007  360776 cri.go:89] found id: ""
	I0229 02:16:16.728037  360776 logs.go:276] 0 containers: []
	W0229 02:16:16.728054  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:16:16.728063  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:16:16.728125  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:16:16.770904  360776 cri.go:89] found id: ""
	I0229 02:16:16.770952  360776 logs.go:276] 0 containers: []
	W0229 02:16:16.770964  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:16:16.770973  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:16:16.771041  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:16:16.812270  360776 cri.go:89] found id: ""
	I0229 02:16:16.812294  360776 logs.go:276] 0 containers: []
	W0229 02:16:16.812303  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:16:16.812309  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:16:16.812358  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:16:16.854461  360776 cri.go:89] found id: ""
	I0229 02:16:16.854487  360776 logs.go:276] 0 containers: []
	W0229 02:16:16.854495  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:16:16.854502  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:16:16.854565  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:16:16.893048  360776 cri.go:89] found id: ""
	I0229 02:16:16.893081  360776 logs.go:276] 0 containers: []
	W0229 02:16:16.893093  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:16:16.893102  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:16:16.893175  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:16:16.934533  360776 cri.go:89] found id: ""
	I0229 02:16:16.934565  360776 logs.go:276] 0 containers: []
	W0229 02:16:16.934576  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:16:16.934589  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:16:16.934608  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:16:16.949773  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:16:16.949806  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:16:17.030457  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:16:17.030483  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:16:17.030500  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:16:17.066911  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:16:17.066947  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:16:17.141648  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:16:17.141680  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:16:19.697967  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:16:19.713729  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:16:19.713786  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:16:19.757898  360776 cri.go:89] found id: ""
	I0229 02:16:19.757929  360776 logs.go:276] 0 containers: []
	W0229 02:16:19.757940  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:16:19.757947  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:16:19.757998  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:16:19.807621  360776 cri.go:89] found id: ""
	I0229 02:16:19.807644  360776 logs.go:276] 0 containers: []
	W0229 02:16:19.807652  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:16:19.807658  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:16:19.807704  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:16:19.846030  360776 cri.go:89] found id: ""
	I0229 02:16:19.846060  360776 logs.go:276] 0 containers: []
	W0229 02:16:19.846071  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:16:19.846089  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:16:19.846157  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:16:19.881842  360776 cri.go:89] found id: ""
	I0229 02:16:19.881870  360776 logs.go:276] 0 containers: []
	W0229 02:16:19.881883  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:16:19.881892  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:16:19.881955  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:16:19.917791  360776 cri.go:89] found id: ""
	I0229 02:16:19.917818  360776 logs.go:276] 0 containers: []
	W0229 02:16:19.917830  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:16:19.917837  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:16:19.917922  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:16:19.954147  360776 cri.go:89] found id: ""
	I0229 02:16:19.954174  360776 logs.go:276] 0 containers: []
	W0229 02:16:19.954186  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:16:19.954194  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:16:19.954259  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:16:19.991466  360776 cri.go:89] found id: ""
	I0229 02:16:19.991495  360776 logs.go:276] 0 containers: []
	W0229 02:16:19.991505  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:16:19.991512  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:16:19.991566  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:16:20.032484  360776 cri.go:89] found id: ""
	I0229 02:16:20.032515  360776 logs.go:276] 0 containers: []
	W0229 02:16:20.032526  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:16:20.032540  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:16:20.032556  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:16:20.084743  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:16:20.084781  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:16:20.105586  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:16:20.105626  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:16:20.206486  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:16:20.206513  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:16:20.206528  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:16:20.250720  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:16:20.250748  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
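
The block above repeats roughly every three seconds: minikube is waiting for a healthy kube-apiserver, and on each failed attempt it re-lists the control-plane containers and re-gathers diagnostics before retrying. A minimal sketch of that retry pattern, assuming a poll helper like k8s.io/apimachinery's wait.PollImmediate; apiServerRunning and collectDiagnostics are illustrative names, not minikube's actual API:

    package main

    import (
        "fmt"
        "os/exec"
        "time"

        "k8s.io/apimachinery/pkg/util/wait"
    )

    // apiServerRunning mirrors the process check in the log:
    // `sudo pgrep -xnf kube-apiserver.*minikube.*`
    func apiServerRunning() bool {
        return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
    }

    // collectDiagnostics stands in for the journal/dmesg/container-status
    // gathering that follows each failed check above.
    func collectDiagnostics() {
        for _, unit := range []string{"kubelet", "containerd"} {
            _ = exec.Command("sudo", "journalctl", "-u", unit, "-n", "400").Run()
        }
    }

    func main() {
        err := wait.PollImmediate(3*time.Second, 6*time.Minute, func() (bool, error) {
            if apiServerRunning() {
                return true, nil
            }
            collectDiagnostics()
            return false, nil // not ready yet; keep polling until the timeout
        })
        if err != nil {
            fmt.Println("kube-apiserver never became ready:", err)
        }
    }
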
	I0229 02:16:22.796158  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:16:22.812126  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:16:22.812208  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:16:22.849744  360776 cri.go:89] found id: ""
	I0229 02:16:22.849776  360776 logs.go:276] 0 containers: []
	W0229 02:16:22.849792  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:16:22.849800  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:16:22.849865  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:16:22.891875  360776 cri.go:89] found id: ""
	I0229 02:16:22.891909  360776 logs.go:276] 0 containers: []
	W0229 02:16:22.891921  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:16:22.891930  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:16:22.891995  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:16:22.931754  360776 cri.go:89] found id: ""
	I0229 02:16:22.931789  360776 logs.go:276] 0 containers: []
	W0229 02:16:22.931801  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:16:22.931809  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:16:22.931878  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:16:22.979291  360776 cri.go:89] found id: ""
	I0229 02:16:22.979322  360776 logs.go:276] 0 containers: []
	W0229 02:16:22.979340  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:16:22.979349  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:16:22.979437  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:16:23.028390  360776 cri.go:89] found id: ""
	I0229 02:16:23.028416  360776 logs.go:276] 0 containers: []
	W0229 02:16:23.028424  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:16:23.028430  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:16:23.028498  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:16:23.077140  360776 cri.go:89] found id: ""
	I0229 02:16:23.077174  360776 logs.go:276] 0 containers: []
	W0229 02:16:23.077187  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:16:23.077202  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:16:23.077274  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:16:23.124275  360776 cri.go:89] found id: ""
	I0229 02:16:23.124316  360776 logs.go:276] 0 containers: []
	W0229 02:16:23.124326  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:16:23.124333  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:16:23.124386  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:16:23.188748  360776 cri.go:89] found id: ""
	I0229 02:16:23.188789  360776 logs.go:276] 0 containers: []
	W0229 02:16:23.188801  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:16:23.188815  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:16:23.188833  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:16:23.247833  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:16:23.247863  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:16:23.263866  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:16:23.263891  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:16:23.347825  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:16:23.347851  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:16:23.347869  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:16:23.383517  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:16:23.383549  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:16:25.925662  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:16:25.940548  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:16:25.940604  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:16:25.977087  360776 cri.go:89] found id: ""
	I0229 02:16:25.977107  360776 logs.go:276] 0 containers: []
	W0229 02:16:25.977116  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:16:25.977149  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:16:25.977230  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:16:26.018569  360776 cri.go:89] found id: ""
	I0229 02:16:26.018602  360776 logs.go:276] 0 containers: []
	W0229 02:16:26.018615  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:16:26.018623  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:16:26.018682  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:16:26.057726  360776 cri.go:89] found id: ""
	I0229 02:16:26.057754  360776 logs.go:276] 0 containers: []
	W0229 02:16:26.057773  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:16:26.057782  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:16:26.057838  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:16:26.097203  360776 cri.go:89] found id: ""
	I0229 02:16:26.097234  360776 logs.go:276] 0 containers: []
	W0229 02:16:26.097247  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:16:26.097256  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:16:26.097322  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:16:26.141897  360776 cri.go:89] found id: ""
	I0229 02:16:26.141925  360776 logs.go:276] 0 containers: []
	W0229 02:16:26.141941  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:16:26.141948  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:16:26.142009  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:16:26.195074  360776 cri.go:89] found id: ""
	I0229 02:16:26.195101  360776 logs.go:276] 0 containers: []
	W0229 02:16:26.195110  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:16:26.195117  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:16:26.195176  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:16:26.252131  360776 cri.go:89] found id: ""
	I0229 02:16:26.252158  360776 logs.go:276] 0 containers: []
	W0229 02:16:26.252166  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:16:26.252172  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:16:26.252249  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:16:26.292730  360776 cri.go:89] found id: ""
	I0229 02:16:26.292752  360776 logs.go:276] 0 containers: []
	W0229 02:16:26.292760  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:16:26.292770  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:16:26.292781  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:16:26.375138  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:16:26.375165  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:16:26.375182  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:16:26.410167  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:16:26.410196  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:16:26.453622  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:16:26.453665  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:16:26.503732  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:16:26.503762  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:16:29.018838  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:16:29.034894  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:16:29.034963  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:16:29.086433  360776 cri.go:89] found id: ""
	I0229 02:16:29.086460  360776 logs.go:276] 0 containers: []
	W0229 02:16:29.086472  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:16:29.086481  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:16:29.086562  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:16:29.134575  360776 cri.go:89] found id: ""
	I0229 02:16:29.134606  360776 logs.go:276] 0 containers: []
	W0229 02:16:29.134619  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:16:29.134627  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:16:29.134701  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:16:29.186372  360776 cri.go:89] found id: ""
	I0229 02:16:29.186408  360776 logs.go:276] 0 containers: []
	W0229 02:16:29.186420  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:16:29.186427  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:16:29.186481  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:16:29.236276  360776 cri.go:89] found id: ""
	I0229 02:16:29.236299  360776 logs.go:276] 0 containers: []
	W0229 02:16:29.236306  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:16:29.236312  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:16:29.236361  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:16:29.280342  360776 cri.go:89] found id: ""
	I0229 02:16:29.280371  360776 logs.go:276] 0 containers: []
	W0229 02:16:29.280380  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:16:29.280389  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:16:29.280461  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:16:29.325017  360776 cri.go:89] found id: ""
	I0229 02:16:29.325047  360776 logs.go:276] 0 containers: []
	W0229 02:16:29.325059  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:16:29.325068  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:16:29.325139  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:16:29.367912  360776 cri.go:89] found id: ""
	I0229 02:16:29.367941  360776 logs.go:276] 0 containers: []
	W0229 02:16:29.367951  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:16:29.367957  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:16:29.368021  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:16:29.404499  360776 cri.go:89] found id: ""
	I0229 02:16:29.404528  360776 logs.go:276] 0 containers: []
	W0229 02:16:29.404538  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:16:29.404548  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:16:29.404562  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:16:29.419724  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:16:29.419755  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:16:29.501923  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:16:29.501952  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:16:29.501971  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:16:29.536724  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:16:29.536762  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:16:29.579709  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:16:29.579744  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:16:32.129825  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:16:32.147723  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:16:32.147815  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:16:32.206978  360776 cri.go:89] found id: ""
	I0229 02:16:32.207016  360776 logs.go:276] 0 containers: []
	W0229 02:16:32.207030  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:16:32.207041  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:16:32.207140  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:16:32.265296  360776 cri.go:89] found id: ""
	I0229 02:16:32.265328  360776 logs.go:276] 0 containers: []
	W0229 02:16:32.265341  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:16:32.265350  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:16:32.265418  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:16:32.312827  360776 cri.go:89] found id: ""
	I0229 02:16:32.312862  360776 logs.go:276] 0 containers: []
	W0229 02:16:32.312874  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:16:32.312882  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:16:32.312946  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:16:32.359988  360776 cri.go:89] found id: ""
	I0229 02:16:32.360024  360776 logs.go:276] 0 containers: []
	W0229 02:16:32.360036  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:16:32.360045  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:16:32.360106  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:16:32.400969  360776 cri.go:89] found id: ""
	I0229 02:16:32.401003  360776 logs.go:276] 0 containers: []
	W0229 02:16:32.401015  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:16:32.401022  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:16:32.401075  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:16:32.437371  360776 cri.go:89] found id: ""
	I0229 02:16:32.437402  360776 logs.go:276] 0 containers: []
	W0229 02:16:32.437411  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:16:32.437419  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:16:32.437491  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:16:32.481199  360776 cri.go:89] found id: ""
	I0229 02:16:32.481227  360776 logs.go:276] 0 containers: []
	W0229 02:16:32.481238  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:16:32.481247  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:16:32.481329  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:16:32.528100  360776 cri.go:89] found id: ""
	I0229 02:16:32.528137  360776 logs.go:276] 0 containers: []
	W0229 02:16:32.528150  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:16:32.528163  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:16:32.528180  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:16:32.565087  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:16:32.565122  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:16:32.616350  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:16:32.616382  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:16:32.669978  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:16:32.670015  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:16:32.684373  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:16:32.684399  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:16:32.769992  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
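
The recurring "describe nodes" failure is the same problem seen from the client side: the kubeconfig at /var/lib/minikube/kubeconfig points kubectl at the apiserver's secure port on localhost:8443, and "connection refused" means nothing is listening there. A quick illustrative probe (not part of minikube) that distinguishes a refused connection from a reachable apiserver port:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // The kubeconfig in the log targets the apiserver at localhost:8443.
        conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
        if err != nil {
            fmt.Println("apiserver unreachable:", err) // "connection refused" in the log
            return
        }
        conn.Close()
        fmt.Println("apiserver port is open")
    }
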
	I0229 02:16:35.270148  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:16:35.289949  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:16:35.290050  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:16:35.334051  360776 cri.go:89] found id: ""
	I0229 02:16:35.334091  360776 logs.go:276] 0 containers: []
	W0229 02:16:35.334103  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:16:35.334112  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:16:35.334170  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:16:35.378536  360776 cri.go:89] found id: ""
	I0229 02:16:35.378571  360776 logs.go:276] 0 containers: []
	W0229 02:16:35.378585  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:16:35.378594  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:16:35.378660  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:16:35.417867  360776 cri.go:89] found id: ""
	I0229 02:16:35.417894  360776 logs.go:276] 0 containers: []
	W0229 02:16:35.417905  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:16:35.417914  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:16:35.417982  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:16:35.455848  360776 cri.go:89] found id: ""
	I0229 02:16:35.455874  360776 logs.go:276] 0 containers: []
	W0229 02:16:35.455887  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:16:35.455896  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:16:35.455964  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:16:35.494787  360776 cri.go:89] found id: ""
	I0229 02:16:35.494814  360776 logs.go:276] 0 containers: []
	W0229 02:16:35.494822  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:16:35.494828  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:16:35.494890  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:16:35.533553  360776 cri.go:89] found id: ""
	I0229 02:16:35.533583  360776 logs.go:276] 0 containers: []
	W0229 02:16:35.533592  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:16:35.533600  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:16:35.533669  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:16:35.581381  360776 cri.go:89] found id: ""
	I0229 02:16:35.581412  360776 logs.go:276] 0 containers: []
	W0229 02:16:35.581422  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:16:35.581429  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:16:35.581494  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:16:35.619128  360776 cri.go:89] found id: ""
	I0229 02:16:35.619158  360776 logs.go:276] 0 containers: []
	W0229 02:16:35.619169  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:16:35.619181  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:16:35.619197  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:16:35.655180  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:16:35.655216  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:16:35.701558  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:16:35.701585  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:16:35.753639  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:16:35.753672  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:16:35.769711  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:16:35.769743  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:16:35.843861  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:16:38.345063  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:16:38.361259  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:16:38.361345  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:16:38.405901  360776 cri.go:89] found id: ""
	I0229 02:16:38.405936  360776 logs.go:276] 0 containers: []
	W0229 02:16:38.405949  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:16:38.405958  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:16:38.406027  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:16:38.447860  360776 cri.go:89] found id: ""
	I0229 02:16:38.447894  360776 logs.go:276] 0 containers: []
	W0229 02:16:38.447907  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:16:38.447915  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:16:38.447983  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:16:38.489711  360776 cri.go:89] found id: ""
	I0229 02:16:38.489737  360776 logs.go:276] 0 containers: []
	W0229 02:16:38.489746  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:16:38.489752  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:16:38.489815  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:16:38.527094  360776 cri.go:89] found id: ""
	I0229 02:16:38.527120  360776 logs.go:276] 0 containers: []
	W0229 02:16:38.527128  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:16:38.527135  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:16:38.527202  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:16:38.564125  360776 cri.go:89] found id: ""
	I0229 02:16:38.564165  360776 logs.go:276] 0 containers: []
	W0229 02:16:38.564175  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:16:38.564183  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:16:38.564257  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:16:38.604355  360776 cri.go:89] found id: ""
	I0229 02:16:38.604385  360776 logs.go:276] 0 containers: []
	W0229 02:16:38.604394  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:16:38.604401  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:16:38.604471  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:16:38.642291  360776 cri.go:89] found id: ""
	I0229 02:16:38.642329  360776 logs.go:276] 0 containers: []
	W0229 02:16:38.642338  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:16:38.642345  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:16:38.642425  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:16:38.684559  360776 cri.go:89] found id: ""
	I0229 02:16:38.684605  360776 logs.go:276] 0 containers: []
	W0229 02:16:38.684617  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:16:38.684632  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:16:38.684646  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:16:38.735189  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:16:38.735230  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:16:38.750359  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:16:38.750388  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:16:38.832749  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:16:38.832777  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:16:38.832793  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:16:38.871321  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:16:38.871355  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:16:41.429960  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:16:41.445002  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:16:41.445081  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:16:41.487833  360776 cri.go:89] found id: ""
	I0229 02:16:41.487867  360776 logs.go:276] 0 containers: []
	W0229 02:16:41.487880  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:16:41.487889  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:16:41.487953  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:16:41.527667  360776 cri.go:89] found id: ""
	I0229 02:16:41.527691  360776 logs.go:276] 0 containers: []
	W0229 02:16:41.527700  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:16:41.527706  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:16:41.527767  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:16:41.568252  360776 cri.go:89] found id: ""
	I0229 02:16:41.568279  360776 logs.go:276] 0 containers: []
	W0229 02:16:41.568289  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:16:41.568295  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:16:41.568347  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:16:41.606664  360776 cri.go:89] found id: ""
	I0229 02:16:41.606697  360776 logs.go:276] 0 containers: []
	W0229 02:16:41.606709  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:16:41.606717  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:16:41.606787  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:16:41.643384  360776 cri.go:89] found id: ""
	I0229 02:16:41.643413  360776 logs.go:276] 0 containers: []
	W0229 02:16:41.643425  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:16:41.643433  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:16:41.643488  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:16:41.685132  360776 cri.go:89] found id: ""
	I0229 02:16:41.685165  360776 logs.go:276] 0 containers: []
	W0229 02:16:41.685179  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:16:41.685188  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:16:41.685255  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:16:41.725844  360776 cri.go:89] found id: ""
	I0229 02:16:41.725874  360776 logs.go:276] 0 containers: []
	W0229 02:16:41.725888  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:16:41.725901  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:16:41.725959  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:16:41.764651  360776 cri.go:89] found id: ""
	I0229 02:16:41.764684  360776 logs.go:276] 0 containers: []
	W0229 02:16:41.764710  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:16:41.764728  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:16:41.764745  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:16:41.846499  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:16:41.846520  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:16:41.846534  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:16:41.889415  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:16:41.889454  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:16:41.955514  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:16:41.955554  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:16:42.011187  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:16:42.011231  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
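
The "container status" command is a fallback chain: it resolves crictl with `which` and runs `crictl ps -a`, and only if that fails does it fall back to `sudo docker ps -a`. A rough sketch of the same ordering (illustrative only; it skips the `which` path resolution):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // containerStatus prefers crictl and falls back to the Docker CLI,
    // mirroring: sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
    func containerStatus() (string, error) {
        if out, err := exec.Command("sudo", "crictl", "ps", "-a").Output(); err == nil {
            return string(out), nil
        }
        out, err := exec.Command("sudo", "docker", "ps", "-a").Output()
        return string(out), err
    }

    func main() {
        out, err := containerStatus()
        if err != nil {
            fmt.Println("no container runtime CLI available:", err)
            return
        }
        fmt.Print(out)
    }
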
	I0229 02:16:44.528746  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:16:44.544657  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:16:44.544735  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:16:44.584593  360776 cri.go:89] found id: ""
	I0229 02:16:44.584619  360776 logs.go:276] 0 containers: []
	W0229 02:16:44.584628  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:16:44.584634  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:16:44.584703  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:16:44.621819  360776 cri.go:89] found id: ""
	I0229 02:16:44.621851  360776 logs.go:276] 0 containers: []
	W0229 02:16:44.621863  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:16:44.621870  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:16:44.621936  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:16:44.661908  360776 cri.go:89] found id: ""
	I0229 02:16:44.661939  360776 logs.go:276] 0 containers: []
	W0229 02:16:44.661951  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:16:44.661959  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:16:44.662042  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:16:44.703135  360776 cri.go:89] found id: ""
	I0229 02:16:44.703168  360776 logs.go:276] 0 containers: []
	W0229 02:16:44.703179  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:16:44.703186  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:16:44.703256  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:16:44.742783  360776 cri.go:89] found id: ""
	I0229 02:16:44.742812  360776 logs.go:276] 0 containers: []
	W0229 02:16:44.742823  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:16:44.742831  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:16:44.742900  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:16:44.786223  360776 cri.go:89] found id: ""
	I0229 02:16:44.786258  360776 logs.go:276] 0 containers: []
	W0229 02:16:44.786271  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:16:44.786280  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:16:44.786348  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:16:44.832269  360776 cri.go:89] found id: ""
	I0229 02:16:44.832295  360776 logs.go:276] 0 containers: []
	W0229 02:16:44.832304  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:16:44.832312  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:16:44.832371  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:16:44.882497  360776 cri.go:89] found id: ""
	I0229 02:16:44.882529  360776 logs.go:276] 0 containers: []
	W0229 02:16:44.882541  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:16:44.882554  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:16:44.882572  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:16:44.898452  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:16:44.898484  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:16:44.988062  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:16:44.988089  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:16:44.988106  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:16:45.025317  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:16:45.025353  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:16:45.069804  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:16:45.069843  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:16:47.621890  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:16:47.636506  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:16:47.636572  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:16:47.679975  360776 cri.go:89] found id: ""
	I0229 02:16:47.680007  360776 logs.go:276] 0 containers: []
	W0229 02:16:47.680019  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:16:47.680026  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:16:47.680099  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:16:47.720573  360776 cri.go:89] found id: ""
	I0229 02:16:47.720604  360776 logs.go:276] 0 containers: []
	W0229 02:16:47.720616  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:16:47.720628  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:16:47.720693  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:16:47.762211  360776 cri.go:89] found id: ""
	I0229 02:16:47.762239  360776 logs.go:276] 0 containers: []
	W0229 02:16:47.762256  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:16:47.762264  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:16:47.762325  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:16:47.801703  360776 cri.go:89] found id: ""
	I0229 02:16:47.801726  360776 logs.go:276] 0 containers: []
	W0229 02:16:47.801736  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:16:47.801745  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:16:47.801804  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:16:47.843036  360776 cri.go:89] found id: ""
	I0229 02:16:47.843065  360776 logs.go:276] 0 containers: []
	W0229 02:16:47.843074  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:16:47.843087  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:16:47.843137  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:16:47.901986  360776 cri.go:89] found id: ""
	I0229 02:16:47.902016  360776 logs.go:276] 0 containers: []
	W0229 02:16:47.902029  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:16:47.902037  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:16:47.902115  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:16:47.970578  360776 cri.go:89] found id: ""
	I0229 02:16:47.970626  360776 logs.go:276] 0 containers: []
	W0229 02:16:47.970638  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:16:47.970646  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:16:47.970727  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:16:48.008245  360776 cri.go:89] found id: ""
	I0229 02:16:48.008280  360776 logs.go:276] 0 containers: []
	W0229 02:16:48.008290  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:16:48.008303  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:16:48.008318  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:16:48.059243  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:16:48.059277  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:16:48.109287  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:16:48.109328  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:16:48.124720  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:16:48.124747  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:16:48.201686  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:16:48.201734  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:16:48.201750  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:16:50.740237  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:16:50.755100  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:16:50.755174  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:16:50.799284  360776 cri.go:89] found id: ""
	I0229 02:16:50.799304  360776 logs.go:276] 0 containers: []
	W0229 02:16:50.799312  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:16:50.799318  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:16:50.799367  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:16:50.863582  360776 cri.go:89] found id: ""
	I0229 02:16:50.863617  360776 logs.go:276] 0 containers: []
	W0229 02:16:50.863630  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:16:50.863638  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:16:50.863709  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:16:50.913067  360776 cri.go:89] found id: ""
	I0229 02:16:50.913097  360776 logs.go:276] 0 containers: []
	W0229 02:16:50.913107  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:16:50.913114  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:16:50.913181  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:16:50.964343  360776 cri.go:89] found id: ""
	I0229 02:16:50.964372  360776 logs.go:276] 0 containers: []
	W0229 02:16:50.964381  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:16:50.964387  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:16:50.964443  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:16:51.008180  360776 cri.go:89] found id: ""
	I0229 02:16:51.008215  360776 logs.go:276] 0 containers: []
	W0229 02:16:51.008226  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:16:51.008234  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:16:51.008314  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:16:51.050574  360776 cri.go:89] found id: ""
	I0229 02:16:51.050604  360776 logs.go:276] 0 containers: []
	W0229 02:16:51.050613  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:16:51.050619  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:16:51.050682  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:16:51.094144  360776 cri.go:89] found id: ""
	I0229 02:16:51.094170  360776 logs.go:276] 0 containers: []
	W0229 02:16:51.094180  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:16:51.094187  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:16:51.094254  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:16:51.133928  360776 cri.go:89] found id: ""
	I0229 02:16:51.133963  360776 logs.go:276] 0 containers: []
	W0229 02:16:51.133976  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:16:51.133989  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:16:51.134005  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:16:51.169857  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:16:51.169888  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:16:51.211739  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:16:51.211774  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:16:51.267237  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:16:51.267277  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:16:51.285167  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:16:51.285200  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:16:51.361051  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:16:53.861859  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:16:53.879047  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:16:53.879124  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:16:53.931722  360776 cri.go:89] found id: ""
	I0229 02:16:53.931751  360776 logs.go:276] 0 containers: []
	W0229 02:16:53.931761  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:16:53.931770  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:16:53.931843  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:16:53.989223  360776 cri.go:89] found id: ""
	I0229 02:16:53.989250  360776 logs.go:276] 0 containers: []
	W0229 02:16:53.989259  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:16:53.989266  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:16:53.989316  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:16:54.029340  360776 cri.go:89] found id: ""
	I0229 02:16:54.029367  360776 logs.go:276] 0 containers: []
	W0229 02:16:54.029379  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:16:54.029394  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:16:54.029455  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:16:54.065032  360776 cri.go:89] found id: ""
	I0229 02:16:54.065061  360776 logs.go:276] 0 containers: []
	W0229 02:16:54.065072  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:16:54.065081  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:16:54.065148  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:16:54.103739  360776 cri.go:89] found id: ""
	I0229 02:16:54.103771  360776 logs.go:276] 0 containers: []
	W0229 02:16:54.103783  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:16:54.103791  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:16:54.103886  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:16:54.146653  360776 cri.go:89] found id: ""
	I0229 02:16:54.146706  360776 logs.go:276] 0 containers: []
	W0229 02:16:54.146720  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:16:54.146728  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:16:54.146804  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:16:54.183885  360776 cri.go:89] found id: ""
	I0229 02:16:54.183909  360776 logs.go:276] 0 containers: []
	W0229 02:16:54.183917  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:16:54.183923  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:16:54.183985  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:16:54.223712  360776 cri.go:89] found id: ""
	I0229 02:16:54.223739  360776 logs.go:276] 0 containers: []
	W0229 02:16:54.223748  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:16:54.223758  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:16:54.223776  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:16:54.239418  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:16:54.239443  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:16:54.316236  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:16:54.316262  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:16:54.316278  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:16:54.351899  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:16:54.351933  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:16:54.396954  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:16:54.396990  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
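	(When every container query comes back empty, the harness falls back to gathering host-level logs. The commands below are taken verbatim from the Run lines above; only the describe-nodes step fails, because nothing is answering on localhost:8443. The gathering order varies between iterations, but the set of sources is fixed:

	    sudo journalctl -u kubelet -n 400
	    sudo journalctl -u containerd -n 400
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	    sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes \
	      --kubeconfig=/var/lib/minikube/kubeconfig
	)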
	I0229 02:16:56.949058  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:16:56.965888  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:16:56.965966  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:16:57.010067  360776 cri.go:89] found id: ""
	I0229 02:16:57.010114  360776 logs.go:276] 0 containers: []
	W0229 02:16:57.010127  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:16:57.010136  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:16:57.010199  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:16:57.048082  360776 cri.go:89] found id: ""
	I0229 02:16:57.048108  360776 logs.go:276] 0 containers: []
	W0229 02:16:57.048116  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:16:57.048123  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:16:57.048172  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:16:57.082859  360776 cri.go:89] found id: ""
	I0229 02:16:57.082890  360776 logs.go:276] 0 containers: []
	W0229 02:16:57.082903  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:16:57.082910  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:16:57.082971  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:16:57.118291  360776 cri.go:89] found id: ""
	I0229 02:16:57.118321  360776 logs.go:276] 0 containers: []
	W0229 02:16:57.118331  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:16:57.118338  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:16:57.118396  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:16:57.155920  360776 cri.go:89] found id: ""
	I0229 02:16:57.155945  360776 logs.go:276] 0 containers: []
	W0229 02:16:57.155954  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:16:57.155960  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:16:57.156007  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:16:57.198460  360776 cri.go:89] found id: ""
	I0229 02:16:57.198494  360776 logs.go:276] 0 containers: []
	W0229 02:16:57.198503  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:16:57.198515  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:16:57.198576  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:16:57.239178  360776 cri.go:89] found id: ""
	I0229 02:16:57.239206  360776 logs.go:276] 0 containers: []
	W0229 02:16:57.239214  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:16:57.239220  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:16:57.239267  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:16:57.280933  360776 cri.go:89] found id: ""
	I0229 02:16:57.280964  360776 logs.go:276] 0 containers: []
	W0229 02:16:57.280977  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:16:57.280988  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:16:57.281004  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:16:57.341023  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:16:57.341056  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:16:57.356053  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:16:57.356083  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:16:57.435017  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:16:57.435040  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:16:57.435057  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:16:57.472428  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:16:57.472461  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:17:00.020707  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:17:00.035406  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:17:00.035476  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:17:00.072190  360776 cri.go:89] found id: ""
	I0229 02:17:00.072222  360776 logs.go:276] 0 containers: []
	W0229 02:17:00.072231  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:17:00.072237  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:17:00.072289  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:17:00.108829  360776 cri.go:89] found id: ""
	I0229 02:17:00.108857  360776 logs.go:276] 0 containers: []
	W0229 02:17:00.108868  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:17:00.108875  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:17:00.108927  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:17:00.143429  360776 cri.go:89] found id: ""
	I0229 02:17:00.143450  360776 logs.go:276] 0 containers: []
	W0229 02:17:00.143459  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:17:00.143465  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:17:00.143512  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:17:00.180428  360776 cri.go:89] found id: ""
	I0229 02:17:00.180456  360776 logs.go:276] 0 containers: []
	W0229 02:17:00.180467  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:17:00.180496  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:17:00.180564  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:17:00.220115  360776 cri.go:89] found id: ""
	I0229 02:17:00.220143  360776 logs.go:276] 0 containers: []
	W0229 02:17:00.220155  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:17:00.220163  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:17:00.220220  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:17:00.258851  360776 cri.go:89] found id: ""
	I0229 02:17:00.258877  360776 logs.go:276] 0 containers: []
	W0229 02:17:00.258887  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:17:00.258895  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:17:00.258982  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:17:00.304148  360776 cri.go:89] found id: ""
	I0229 02:17:00.304174  360776 logs.go:276] 0 containers: []
	W0229 02:17:00.304185  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:17:00.304193  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:17:00.304277  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:17:00.345893  360776 cri.go:89] found id: ""
	I0229 02:17:00.345923  360776 logs.go:276] 0 containers: []
	W0229 02:17:00.345935  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:17:00.345950  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:17:00.345965  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:17:00.395977  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:17:00.396006  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:17:00.410948  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:17:00.410970  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:17:00.485724  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:17:00.485745  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:17:00.485760  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:17:00.520496  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:17:00.520531  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:17:03.065669  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:17:03.081434  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:17:03.081496  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:17:03.118752  360776 cri.go:89] found id: ""
	I0229 02:17:03.118779  360776 logs.go:276] 0 containers: []
	W0229 02:17:03.118788  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:17:03.118794  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:17:03.118870  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:17:03.156172  360776 cri.go:89] found id: ""
	I0229 02:17:03.156197  360776 logs.go:276] 0 containers: []
	W0229 02:17:03.156209  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:17:03.156216  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:17:03.156285  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:17:03.190792  360776 cri.go:89] found id: ""
	I0229 02:17:03.190815  360776 logs.go:276] 0 containers: []
	W0229 02:17:03.190823  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:17:03.190829  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:17:03.190885  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:17:03.229692  360776 cri.go:89] found id: ""
	I0229 02:17:03.229721  360776 logs.go:276] 0 containers: []
	W0229 02:17:03.229733  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:17:03.229741  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:17:03.229800  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:17:03.271014  360776 cri.go:89] found id: ""
	I0229 02:17:03.271044  360776 logs.go:276] 0 containers: []
	W0229 02:17:03.271053  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:17:03.271058  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:17:03.271118  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:17:03.315291  360776 cri.go:89] found id: ""
	I0229 02:17:03.315316  360776 logs.go:276] 0 containers: []
	W0229 02:17:03.315325  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:17:03.315332  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:17:03.315390  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:17:03.354974  360776 cri.go:89] found id: ""
	I0229 02:17:03.354998  360776 logs.go:276] 0 containers: []
	W0229 02:17:03.355007  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:17:03.355014  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:17:03.355091  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:17:03.394044  360776 cri.go:89] found id: ""
	I0229 02:17:03.394074  360776 logs.go:276] 0 containers: []
	W0229 02:17:03.394101  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:17:03.394120  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:17:03.394138  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:17:03.430131  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:17:03.430164  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:17:03.472760  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:17:03.472793  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:17:03.522797  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:17:03.522837  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:17:03.538642  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:17:03.538672  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:17:03.611189  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
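	(The recurring "connection to the server localhost:8443 was refused" stderr means the apiserver container never started, so the kubeconfig's localhost:8443 endpoint has no listener. Two hypothetical spot checks that would confirm this from inside the VM; ss and curl are assumptions here, not part of the test harness:

	    # is anything listening on the apiserver port? (assumption: ss is available)
	    sudo ss -ltn 'sport = :8443' || true
	    # a health probe against the kubeconfig endpoint; refused here, matching the log
	    curl -sk https://localhost:8443/healthz
	)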
	I0229 02:17:06.112319  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:17:06.126843  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:17:06.126924  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:17:06.171970  360776 cri.go:89] found id: ""
	I0229 02:17:06.171995  360776 logs.go:276] 0 containers: []
	W0229 02:17:06.172005  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:17:06.172011  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:17:06.172060  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:17:06.208082  360776 cri.go:89] found id: ""
	I0229 02:17:06.208114  360776 logs.go:276] 0 containers: []
	W0229 02:17:06.208126  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:17:06.208133  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:17:06.208211  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:17:06.246429  360776 cri.go:89] found id: ""
	I0229 02:17:06.246454  360776 logs.go:276] 0 containers: []
	W0229 02:17:06.246465  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:17:06.246472  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:17:06.246521  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:17:06.286908  360776 cri.go:89] found id: ""
	I0229 02:17:06.286941  360776 logs.go:276] 0 containers: []
	W0229 02:17:06.286952  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:17:06.286959  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:17:06.287036  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:17:06.330632  360776 cri.go:89] found id: ""
	I0229 02:17:06.330664  360776 logs.go:276] 0 containers: []
	W0229 02:17:06.330707  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:17:06.330720  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:17:06.330793  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:17:06.368385  360776 cri.go:89] found id: ""
	I0229 02:17:06.368412  360776 logs.go:276] 0 containers: []
	W0229 02:17:06.368423  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:17:06.368431  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:17:06.368499  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:17:06.407424  360776 cri.go:89] found id: ""
	I0229 02:17:06.407456  360776 logs.go:276] 0 containers: []
	W0229 02:17:06.407468  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:17:06.407476  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:17:06.407542  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:17:06.447043  360776 cri.go:89] found id: ""
	I0229 02:17:06.447072  360776 logs.go:276] 0 containers: []
	W0229 02:17:06.447084  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:17:06.447098  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:17:06.447119  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:17:06.501604  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:17:06.501639  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:17:06.516247  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:17:06.516274  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:17:06.593087  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:17:06.593112  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:17:06.593126  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:17:06.633057  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:17:06.633097  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:17:09.202624  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:17:09.218424  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:17:09.218496  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:17:09.264508  360776 cri.go:89] found id: ""
	I0229 02:17:09.264538  360776 logs.go:276] 0 containers: []
	W0229 02:17:09.264551  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:17:09.264560  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:17:09.264652  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:17:09.304507  360776 cri.go:89] found id: ""
	I0229 02:17:09.304536  360776 logs.go:276] 0 containers: []
	W0229 02:17:09.304547  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:17:09.304555  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:17:09.304619  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:17:09.354779  360776 cri.go:89] found id: ""
	I0229 02:17:09.354802  360776 logs.go:276] 0 containers: []
	W0229 02:17:09.354811  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:17:09.354817  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:17:09.354866  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:17:09.390031  360776 cri.go:89] found id: ""
	I0229 02:17:09.390065  360776 logs.go:276] 0 containers: []
	W0229 02:17:09.390097  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:17:09.390106  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:17:09.390182  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:17:09.435618  360776 cri.go:89] found id: ""
	I0229 02:17:09.435652  360776 logs.go:276] 0 containers: []
	W0229 02:17:09.435666  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:17:09.435674  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:17:09.435757  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:17:09.479110  360776 cri.go:89] found id: ""
	I0229 02:17:09.479142  360776 logs.go:276] 0 containers: []
	W0229 02:17:09.479154  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:17:09.479163  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:17:09.479236  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:17:09.520748  360776 cri.go:89] found id: ""
	I0229 02:17:09.520781  360776 logs.go:276] 0 containers: []
	W0229 02:17:09.520794  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:17:09.520802  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:17:09.520879  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:17:09.561536  360776 cri.go:89] found id: ""
	I0229 02:17:09.561576  360776 logs.go:276] 0 containers: []
	W0229 02:17:09.561590  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:17:09.561611  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:17:09.561628  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:17:09.621631  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:17:09.621678  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:17:09.640562  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:17:09.640607  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:17:09.727979  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:17:09.728001  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:17:09.728013  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:17:09.766305  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:17:09.766340  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:17:12.312841  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:17:12.329745  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:17:12.329826  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:17:12.376185  360776 cri.go:89] found id: ""
	I0229 02:17:12.376218  360776 logs.go:276] 0 containers: []
	W0229 02:17:12.376230  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:17:12.376240  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:17:12.376317  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:17:12.417025  360776 cri.go:89] found id: ""
	I0229 02:17:12.417059  360776 logs.go:276] 0 containers: []
	W0229 02:17:12.417068  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:17:12.417080  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:17:12.417162  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:17:12.458973  360776 cri.go:89] found id: ""
	I0229 02:17:12.459018  360776 logs.go:276] 0 containers: []
	W0229 02:17:12.459040  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:17:12.459048  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:17:12.459116  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:17:12.500063  360776 cri.go:89] found id: ""
	I0229 02:17:12.500090  360776 logs.go:276] 0 containers: []
	W0229 02:17:12.500102  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:17:12.500110  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:17:12.500177  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:17:12.543182  360776 cri.go:89] found id: ""
	I0229 02:17:12.543213  360776 logs.go:276] 0 containers: []
	W0229 02:17:12.543225  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:17:12.543234  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:17:12.543296  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:17:12.584725  360776 cri.go:89] found id: ""
	I0229 02:17:12.584773  360776 logs.go:276] 0 containers: []
	W0229 02:17:12.584796  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:17:12.584804  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:17:12.584873  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:17:12.634212  360776 cri.go:89] found id: ""
	I0229 02:17:12.634244  360776 logs.go:276] 0 containers: []
	W0229 02:17:12.634256  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:17:12.634263  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:17:12.634330  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:17:12.686103  360776 cri.go:89] found id: ""
	I0229 02:17:12.686134  360776 logs.go:276] 0 containers: []
	W0229 02:17:12.686144  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:17:12.686154  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:17:12.686168  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:17:12.753950  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:17:12.753999  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:17:12.769400  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:17:12.769430  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:17:12.856362  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:17:12.856390  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:17:12.856408  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:17:12.893238  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:17:12.893274  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:17:15.439069  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:17:15.455698  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:17:15.455779  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:17:15.501222  360776 cri.go:89] found id: ""
	I0229 02:17:15.501248  360776 logs.go:276] 0 containers: []
	W0229 02:17:15.501262  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:17:15.501269  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:17:15.501331  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:17:15.544580  360776 cri.go:89] found id: ""
	I0229 02:17:15.544610  360776 logs.go:276] 0 containers: []
	W0229 02:17:15.544623  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:17:15.544632  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:17:15.544697  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:17:15.587250  360776 cri.go:89] found id: ""
	I0229 02:17:15.587301  360776 logs.go:276] 0 containers: []
	W0229 02:17:15.587314  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:17:15.587322  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:17:15.587392  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:17:15.660189  360776 cri.go:89] found id: ""
	I0229 02:17:15.660214  360776 logs.go:276] 0 containers: []
	W0229 02:17:15.660223  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:17:15.660229  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:17:15.660280  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:17:15.715100  360776 cri.go:89] found id: ""
	I0229 02:17:15.715126  360776 logs.go:276] 0 containers: []
	W0229 02:17:15.715136  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:17:15.715142  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:17:15.715203  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:17:15.758998  360776 cri.go:89] found id: ""
	I0229 02:17:15.759028  360776 logs.go:276] 0 containers: []
	W0229 02:17:15.759047  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:17:15.759053  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:17:15.759118  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:17:15.801175  360776 cri.go:89] found id: ""
	I0229 02:17:15.801203  360776 logs.go:276] 0 containers: []
	W0229 02:17:15.801215  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:17:15.801224  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:17:15.801294  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:17:15.849643  360776 cri.go:89] found id: ""
	I0229 02:17:15.849678  360776 logs.go:276] 0 containers: []
	W0229 02:17:15.849690  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:17:15.849704  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:17:15.849724  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:17:15.864824  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:17:15.864856  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:17:15.937271  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:17:15.937299  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:17:15.937313  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:17:15.976404  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:17:15.976448  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:17:16.025658  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:17:16.025697  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:17:18.574763  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:17:18.593695  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:17:18.593802  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:17:18.641001  360776 cri.go:89] found id: ""
	I0229 02:17:18.641033  360776 logs.go:276] 0 containers: []
	W0229 02:17:18.641042  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:17:18.641048  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:17:18.641106  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:17:18.701580  360776 cri.go:89] found id: ""
	I0229 02:17:18.701608  360776 logs.go:276] 0 containers: []
	W0229 02:17:18.701617  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:17:18.701623  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:17:18.701674  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:17:18.742596  360776 cri.go:89] found id: ""
	I0229 02:17:18.742632  360776 logs.go:276] 0 containers: []
	W0229 02:17:18.742642  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:17:18.742649  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:17:18.742712  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:17:18.782404  360776 cri.go:89] found id: ""
	I0229 02:17:18.782432  360776 logs.go:276] 0 containers: []
	W0229 02:17:18.782443  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:17:18.782451  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:17:18.782516  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:17:18.826221  360776 cri.go:89] found id: ""
	I0229 02:17:18.826250  360776 logs.go:276] 0 containers: []
	W0229 02:17:18.826262  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:17:18.826270  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:17:18.826354  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:17:18.864698  360776 cri.go:89] found id: ""
	I0229 02:17:18.864737  360776 logs.go:276] 0 containers: []
	W0229 02:17:18.864746  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:17:18.864766  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:17:18.864819  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:17:18.902681  360776 cri.go:89] found id: ""
	I0229 02:17:18.902708  360776 logs.go:276] 0 containers: []
	W0229 02:17:18.902718  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:17:18.902723  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:17:18.902835  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:17:18.942178  360776 cri.go:89] found id: ""
	I0229 02:17:18.942203  360776 logs.go:276] 0 containers: []
	W0229 02:17:18.942213  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:17:18.942223  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:17:18.942236  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:17:18.983914  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:17:18.983947  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:17:19.041670  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:17:19.041710  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:17:19.057445  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:17:19.057475  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:17:19.128946  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:17:19.128974  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:17:19.129007  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:17:21.664806  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:17:21.680938  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:17:21.681037  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:17:21.737776  360776 cri.go:89] found id: ""
	I0229 02:17:21.737808  360776 logs.go:276] 0 containers: []
	W0229 02:17:21.737825  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:17:21.737833  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:17:21.737913  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:17:21.778917  360776 cri.go:89] found id: ""
	I0229 02:17:21.778951  360776 logs.go:276] 0 containers: []
	W0229 02:17:21.778962  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:17:21.778969  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:17:21.779033  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:17:21.819099  360776 cri.go:89] found id: ""
	I0229 02:17:21.819127  360776 logs.go:276] 0 containers: []
	W0229 02:17:21.819139  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:17:21.819147  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:17:21.819230  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:17:21.861290  360776 cri.go:89] found id: ""
	I0229 02:17:21.861323  360776 logs.go:276] 0 containers: []
	W0229 02:17:21.861334  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:17:21.861342  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:17:21.861406  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:17:21.900886  360776 cri.go:89] found id: ""
	I0229 02:17:21.900926  360776 logs.go:276] 0 containers: []
	W0229 02:17:21.900938  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:17:21.900946  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:17:21.901021  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:17:21.943023  360776 cri.go:89] found id: ""
	I0229 02:17:21.943060  360776 logs.go:276] 0 containers: []
	W0229 02:17:21.943072  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:17:21.943080  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:17:21.943145  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:17:21.984305  360776 cri.go:89] found id: ""
	I0229 02:17:21.984341  360776 logs.go:276] 0 containers: []
	W0229 02:17:21.984352  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:17:21.984360  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:17:21.984428  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:17:22.025326  360776 cri.go:89] found id: ""
	I0229 02:17:22.025356  360776 logs.go:276] 0 containers: []
	W0229 02:17:22.025368  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:17:22.025382  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:17:22.025398  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:17:22.074977  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:17:22.075020  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:17:22.092483  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:17:22.092518  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:17:22.171791  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:17:22.171814  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:17:22.171833  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:17:22.211794  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:17:22.211850  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
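	(Compressing the timestamps, the probe repeats on roughly a three-second cadence (02:17:15 → 02:17:18 → 02:17:21 → 02:17:24). A sketch of the equivalent wait loop, with the interval inferred from the log and the overall deadline left out because it is not visible in this excerpt:

	    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	      sleep 3  # spacing observed between probes in the log
	      # each miss triggers the crictl queries and log gathering shown above
	    done
	)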
	I0229 02:17:24.758800  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:17:24.773418  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:17:24.773501  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:17:24.819487  360776 cri.go:89] found id: ""
	I0229 02:17:24.819520  360776 logs.go:276] 0 containers: []
	W0229 02:17:24.819531  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:17:24.819540  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:17:24.819605  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:17:24.859906  360776 cri.go:89] found id: ""
	I0229 02:17:24.859938  360776 logs.go:276] 0 containers: []
	W0229 02:17:24.859949  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:17:24.859957  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:17:24.860022  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:17:24.897499  360776 cri.go:89] found id: ""
	I0229 02:17:24.897531  360776 logs.go:276] 0 containers: []
	W0229 02:17:24.897540  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:17:24.897547  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:17:24.897622  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:17:24.935346  360776 cri.go:89] found id: ""
	I0229 02:17:24.935380  360776 logs.go:276] 0 containers: []
	W0229 02:17:24.935393  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:17:24.935401  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:17:24.935468  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:17:24.973567  360776 cri.go:89] found id: ""
	I0229 02:17:24.973591  360776 logs.go:276] 0 containers: []
	W0229 02:17:24.973600  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:17:24.973605  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:17:24.973657  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:17:25.016166  360776 cri.go:89] found id: ""
	I0229 02:17:25.016198  360776 logs.go:276] 0 containers: []
	W0229 02:17:25.016210  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:17:25.016217  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:17:25.016285  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:17:25.059944  360776 cri.go:89] found id: ""
	I0229 02:17:25.059977  360776 logs.go:276] 0 containers: []
	W0229 02:17:25.059991  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:17:25.059999  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:17:25.060057  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:17:25.101594  360776 cri.go:89] found id: ""
	I0229 02:17:25.101627  360776 logs.go:276] 0 containers: []
	W0229 02:17:25.101639  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:17:25.101652  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:17:25.101672  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:17:25.183940  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
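The repeated refusal on localhost:8443 is consistent with the empty crictl listings above: no kube-apiserver container exists, so nothing is bound to the secure port. A quick manual check from inside the guest separates a missing listener from a TLS or kubeconfig problem (a sketch; it assumes ss and curl are available in the guest image):

    sudo ss -ltn 'sport = :8443'    # no output means nothing is listening on 8443
    curl -sk https://localhost:8443/healthz || echo "apiserver unreachable"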
	I0229 02:17:25.183988  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:17:25.184007  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:17:25.219286  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:17:25.219327  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:17:25.267048  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:17:25.267107  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:17:25.320969  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:17:25.320998  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
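Each iteration of this wait loop probes every control-plane component by name through the CRI. The same probe can be reproduced by hand; a minimal sketch, assuming a shell inside the guest (e.g. via minikube ssh) with crictl on PATH:

    # List all CRI containers (any state) whose name matches each component,
    # exactly as the log entries above do, and report those with no match.
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet kubernetes-dashboard; do
      ids=$(sudo crictl ps -a --quiet --name="$name")
      [ -z "$ids" ] && echo "no container matching \"$name\""
    done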
	I0229 02:17:27.846314  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:17:27.861349  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:17:27.861416  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:17:27.901126  360776 cri.go:89] found id: ""
	I0229 02:17:27.901153  360776 logs.go:276] 0 containers: []
	W0229 02:17:27.901162  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:17:27.901169  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:17:27.901220  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:17:27.942692  360776 cri.go:89] found id: ""
	I0229 02:17:27.942725  360776 logs.go:276] 0 containers: []
	W0229 02:17:27.942738  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:17:27.942745  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:17:27.942803  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:17:27.978891  360776 cri.go:89] found id: ""
	I0229 02:17:27.978919  360776 logs.go:276] 0 containers: []
	W0229 02:17:27.978928  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:17:27.978934  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:17:27.978991  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:17:28.019688  360776 cri.go:89] found id: ""
	I0229 02:17:28.019723  360776 logs.go:276] 0 containers: []
	W0229 02:17:28.019735  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:17:28.019743  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:17:28.019799  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:17:28.056414  360776 cri.go:89] found id: ""
	I0229 02:17:28.056438  360776 logs.go:276] 0 containers: []
	W0229 02:17:28.056451  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:17:28.056457  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:17:28.056504  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:17:28.093691  360776 cri.go:89] found id: ""
	I0229 02:17:28.093727  360776 logs.go:276] 0 containers: []
	W0229 02:17:28.093739  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:17:28.093747  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:17:28.093806  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:17:28.130737  360776 cri.go:89] found id: ""
	I0229 02:17:28.130761  360776 logs.go:276] 0 containers: []
	W0229 02:17:28.130768  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:17:28.130774  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:17:28.130828  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:17:28.167783  360776 cri.go:89] found id: ""
	I0229 02:17:28.167810  360776 logs.go:276] 0 containers: []
	W0229 02:17:28.167820  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:17:28.167832  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:17:28.167850  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:17:28.248054  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
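Note that the describe step runs the version-matched kubectl that minikube installs into the guest (/var/lib/minikube/binaries/v1.16.0/kubectl) rather than any host binary, so client/server version skew is ruled out as a cause. Once an apiserver is up, the same binary answers ad-hoc queries; a sketch (the `get nodes -o wide` query is illustrative, not part of the test):

    sudo /var/lib/minikube/binaries/v1.16.0/kubectl \
      --kubeconfig=/var/lib/minikube/kubeconfig get nodes -o wide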
	I0229 02:17:28.248080  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:17:28.248096  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:17:28.284935  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:17:28.284963  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:17:28.328563  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:17:28.328605  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:17:28.379372  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:17:28.379412  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:17:30.896570  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:17:30.912070  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:17:30.912140  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:17:30.951633  360776 cri.go:89] found id: ""
	I0229 02:17:30.951662  360776 logs.go:276] 0 containers: []
	W0229 02:17:30.951674  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:17:30.951681  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:17:30.951725  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:17:30.988094  360776 cri.go:89] found id: ""
	I0229 02:17:30.988121  360776 logs.go:276] 0 containers: []
	W0229 02:17:30.988133  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:17:30.988141  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:17:30.988197  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:17:31.025379  360776 cri.go:89] found id: ""
	I0229 02:17:31.025405  360776 logs.go:276] 0 containers: []
	W0229 02:17:31.025416  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:17:31.025423  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:17:31.025476  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:17:31.064070  360776 cri.go:89] found id: ""
	I0229 02:17:31.064100  360776 logs.go:276] 0 containers: []
	W0229 02:17:31.064112  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:17:31.064120  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:17:31.064178  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:17:31.106455  360776 cri.go:89] found id: ""
	I0229 02:17:31.106487  360776 logs.go:276] 0 containers: []
	W0229 02:17:31.106498  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:17:31.106505  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:17:31.106564  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:17:31.141789  360776 cri.go:89] found id: ""
	I0229 02:17:31.141819  360776 logs.go:276] 0 containers: []
	W0229 02:17:31.141830  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:17:31.141838  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:17:31.141985  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:17:31.181781  360776 cri.go:89] found id: ""
	I0229 02:17:31.181807  360776 logs.go:276] 0 containers: []
	W0229 02:17:31.181815  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:17:31.181820  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:17:31.181877  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:17:31.222653  360776 cri.go:89] found id: ""
	I0229 02:17:31.222687  360776 logs.go:276] 0 containers: []
	W0229 02:17:31.222700  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:17:31.222713  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:17:31.222730  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:17:31.272067  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:17:31.272100  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:17:31.287890  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:17:31.287917  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:17:31.370516  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:17:31.370545  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:17:31.370559  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:17:31.416216  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:17:31.416257  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
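The "Gathering logs" targets rotate in order between iterations but are always the same five: kubelet, dmesg, describe nodes, containerd, and container status. Bundled into one helper, each cycle's collection step looks roughly like the sketch below, which simply replays the commands from the log (it assumes the systemd-managed kubelet and containerd units of the minikube guest):

    gather_logs() {
      sudo journalctl -u kubelet -n 400
      sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
      sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes \
        --kubeconfig=/var/lib/minikube/kubeconfig
      sudo journalctl -u containerd -n 400
      sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a
    }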
	I0229 02:17:33.976724  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:17:33.991119  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:17:33.991202  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:17:34.038632  360776 cri.go:89] found id: ""
	I0229 02:17:34.038659  360776 logs.go:276] 0 containers: []
	W0229 02:17:34.038668  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:17:34.038674  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:17:34.038744  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:17:34.076069  360776 cri.go:89] found id: ""
	I0229 02:17:34.076109  360776 logs.go:276] 0 containers: []
	W0229 02:17:34.076120  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:17:34.076128  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:17:34.076212  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:17:34.122220  360776 cri.go:89] found id: ""
	I0229 02:17:34.122246  360776 logs.go:276] 0 containers: []
	W0229 02:17:34.122256  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:17:34.122265  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:17:34.122329  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:17:34.163216  360776 cri.go:89] found id: ""
	I0229 02:17:34.163246  360776 logs.go:276] 0 containers: []
	W0229 02:17:34.163259  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:17:34.163268  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:17:34.163337  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:17:34.206631  360776 cri.go:89] found id: ""
	I0229 02:17:34.206679  360776 logs.go:276] 0 containers: []
	W0229 02:17:34.206691  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:17:34.206698  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:17:34.206766  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:17:34.250992  360776 cri.go:89] found id: ""
	I0229 02:17:34.251024  360776 logs.go:276] 0 containers: []
	W0229 02:17:34.251037  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:17:34.251048  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:17:34.251116  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:17:34.289582  360776 cri.go:89] found id: ""
	I0229 02:17:34.289609  360776 logs.go:276] 0 containers: []
	W0229 02:17:34.289620  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:17:34.289626  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:17:34.289690  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:17:34.335130  360776 cri.go:89] found id: ""
	I0229 02:17:34.335158  360776 logs.go:276] 0 containers: []
	W0229 02:17:34.335169  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:17:34.335182  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:17:34.335198  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:17:34.365870  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:17:34.365920  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:17:34.462536  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:17:34.462567  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:17:34.462585  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:17:34.500235  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:17:34.500281  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:17:34.551106  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:17:34.551146  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:17:37.104547  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:17:37.123303  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:17:37.123367  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:17:37.164350  360776 cri.go:89] found id: ""
	I0229 02:17:37.164378  360776 logs.go:276] 0 containers: []
	W0229 02:17:37.164391  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:17:37.164401  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:17:37.164466  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:17:37.209965  360776 cri.go:89] found id: ""
	I0229 02:17:37.210000  360776 logs.go:276] 0 containers: []
	W0229 02:17:37.210014  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:17:37.210023  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:17:37.210125  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:17:37.253162  360776 cri.go:89] found id: ""
	I0229 02:17:37.253192  360776 logs.go:276] 0 containers: []
	W0229 02:17:37.253205  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:17:37.253213  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:17:37.253293  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:17:37.300836  360776 cri.go:89] found id: ""
	I0229 02:17:37.300862  360776 logs.go:276] 0 containers: []
	W0229 02:17:37.300872  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:17:37.300880  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:17:37.300944  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:17:37.343546  360776 cri.go:89] found id: ""
	I0229 02:17:37.343573  360776 logs.go:276] 0 containers: []
	W0229 02:17:37.343585  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:17:37.343598  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:17:37.343669  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:17:37.407526  360776 cri.go:89] found id: ""
	I0229 02:17:37.407554  360776 logs.go:276] 0 containers: []
	W0229 02:17:37.407567  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:17:37.407574  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:17:37.407642  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:17:37.486848  360776 cri.go:89] found id: ""
	I0229 02:17:37.486890  360776 logs.go:276] 0 containers: []
	W0229 02:17:37.486902  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:17:37.486910  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:17:37.486978  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:17:37.529152  360776 cri.go:89] found id: ""
	I0229 02:17:37.529187  360776 logs.go:276] 0 containers: []
	W0229 02:17:37.529199  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:17:37.529221  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:17:37.529238  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:17:37.594611  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:17:37.594642  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:17:37.612946  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:17:37.612980  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:17:37.697527  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:17:37.697552  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:17:37.697568  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:17:37.737130  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:17:37.737165  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
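The cri.go lines scope every lookup to containerd's k8s.io namespace (runc root /run/containerd/runc/k8s.io), which is where Kubernetes-managed containers live. The same emptiness can be cross-checked with containerd's own client; a sketch assuming the ctr binary that ships with containerd is present in the guest:

    sudo ctr --namespace k8s.io containers ls    # expect an empty list here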
	I0229 02:17:40.285260  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:17:40.302884  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:17:40.302962  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:17:40.346431  360776 cri.go:89] found id: ""
	I0229 02:17:40.346463  360776 logs.go:276] 0 containers: []
	W0229 02:17:40.346474  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:17:40.346481  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:17:40.346547  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:17:40.403100  360776 cri.go:89] found id: ""
	I0229 02:17:40.403132  360776 logs.go:276] 0 containers: []
	W0229 02:17:40.403147  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:17:40.403154  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:17:40.403223  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:17:40.466390  360776 cri.go:89] found id: ""
	I0229 02:17:40.466424  360776 logs.go:276] 0 containers: []
	W0229 02:17:40.466435  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:17:40.466444  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:17:40.466516  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:17:40.509811  360776 cri.go:89] found id: ""
	I0229 02:17:40.509840  360776 logs.go:276] 0 containers: []
	W0229 02:17:40.509851  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:17:40.509859  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:17:40.509918  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:17:40.546249  360776 cri.go:89] found id: ""
	I0229 02:17:40.546281  360776 logs.go:276] 0 containers: []
	W0229 02:17:40.546294  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:17:40.546302  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:17:40.546366  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:17:40.584490  360776 cri.go:89] found id: ""
	I0229 02:17:40.584520  360776 logs.go:276] 0 containers: []
	W0229 02:17:40.584532  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:17:40.584540  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:17:40.584602  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:17:40.628397  360776 cri.go:89] found id: ""
	I0229 02:17:40.628427  360776 logs.go:276] 0 containers: []
	W0229 02:17:40.628439  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:17:40.628447  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:17:40.628508  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:17:40.675557  360776 cri.go:89] found id: ""
	I0229 02:17:40.675584  360776 logs.go:276] 0 containers: []
	W0229 02:17:40.675593  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:17:40.675603  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:17:40.675616  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:17:40.762140  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:17:40.762167  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:17:40.762192  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:17:40.808405  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:17:40.808444  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:17:40.860511  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:17:40.860553  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:17:40.929977  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:17:40.930013  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:17:43.449607  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:17:43.466367  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:17:43.466441  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:17:43.504826  360776 cri.go:89] found id: ""
	I0229 02:17:43.504861  360776 logs.go:276] 0 containers: []
	W0229 02:17:43.504873  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:17:43.504880  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:17:43.504946  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:17:43.548641  360776 cri.go:89] found id: ""
	I0229 02:17:43.548682  360776 logs.go:276] 0 containers: []
	W0229 02:17:43.548693  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:17:43.548701  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:17:43.548760  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:17:43.591044  360776 cri.go:89] found id: ""
	I0229 02:17:43.591075  360776 logs.go:276] 0 containers: []
	W0229 02:17:43.591085  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:17:43.591092  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:17:43.591152  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:17:43.639237  360776 cri.go:89] found id: ""
	I0229 02:17:43.639261  360776 logs.go:276] 0 containers: []
	W0229 02:17:43.639269  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:17:43.639275  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:17:43.639329  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:17:43.677231  360776 cri.go:89] found id: ""
	I0229 02:17:43.677264  360776 logs.go:276] 0 containers: []
	W0229 02:17:43.677277  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:17:43.677285  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:17:43.677359  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:17:43.721264  360776 cri.go:89] found id: ""
	I0229 02:17:43.721295  360776 logs.go:276] 0 containers: []
	W0229 02:17:43.721306  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:17:43.721314  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:17:43.721379  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:17:43.757248  360776 cri.go:89] found id: ""
	I0229 02:17:43.757281  360776 logs.go:276] 0 containers: []
	W0229 02:17:43.757293  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:17:43.757300  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:17:43.757365  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:17:43.802304  360776 cri.go:89] found id: ""
	I0229 02:17:43.802332  360776 logs.go:276] 0 containers: []
	W0229 02:17:43.802343  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:17:43.802359  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:17:43.802375  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:17:43.855921  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:17:43.855949  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:17:43.869586  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:17:43.869623  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:17:43.945526  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:17:43.945562  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:17:43.945579  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:17:43.987179  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:17:43.987215  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:17:46.537504  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:17:46.556578  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:17:46.556653  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:17:46.603983  360776 cri.go:89] found id: ""
	I0229 02:17:46.604012  360776 logs.go:276] 0 containers: []
	W0229 02:17:46.604025  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:17:46.604037  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:17:46.604107  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:17:46.657708  360776 cri.go:89] found id: ""
	I0229 02:17:46.657736  360776 logs.go:276] 0 containers: []
	W0229 02:17:46.657747  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:17:46.657754  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:17:46.657820  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:17:46.708795  360776 cri.go:89] found id: ""
	I0229 02:17:46.708830  360776 logs.go:276] 0 containers: []
	W0229 02:17:46.708843  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:17:46.708852  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:17:46.708920  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:17:46.758013  360776 cri.go:89] found id: ""
	I0229 02:17:46.758043  360776 logs.go:276] 0 containers: []
	W0229 02:17:46.758056  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:17:46.758064  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:17:46.758157  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:17:46.813107  360776 cri.go:89] found id: ""
	I0229 02:17:46.813138  360776 logs.go:276] 0 containers: []
	W0229 02:17:46.813149  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:17:46.813156  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:17:46.813219  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:17:46.859040  360776 cri.go:89] found id: ""
	I0229 02:17:46.859070  360776 logs.go:276] 0 containers: []
	W0229 02:17:46.859081  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:17:46.859089  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:17:46.859154  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:17:46.905302  360776 cri.go:89] found id: ""
	I0229 02:17:46.905334  360776 logs.go:276] 0 containers: []
	W0229 02:17:46.905346  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:17:46.905354  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:17:46.905416  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:17:46.950465  360776 cri.go:89] found id: ""
	I0229 02:17:46.950491  360776 logs.go:276] 0 containers: []
	W0229 02:17:46.950502  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:17:46.950515  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:17:46.950530  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:17:47.035016  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:17:47.035044  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:17:47.035062  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:17:47.074108  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:17:47.074140  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:17:47.122149  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:17:47.122183  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:17:47.187233  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:17:47.187283  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:17:49.708451  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:17:49.727327  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:17:49.727383  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:17:49.775679  360776 cri.go:89] found id: ""
	I0229 02:17:49.775712  360776 logs.go:276] 0 containers: []
	W0229 02:17:49.775723  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:17:49.775732  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:17:49.775795  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:17:49.821348  360776 cri.go:89] found id: ""
	I0229 02:17:49.821378  360776 logs.go:276] 0 containers: []
	W0229 02:17:49.821387  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:17:49.821393  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:17:49.821459  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:17:49.864148  360776 cri.go:89] found id: ""
	I0229 02:17:49.864173  360776 logs.go:276] 0 containers: []
	W0229 02:17:49.864182  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:17:49.864188  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:17:49.864281  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:17:49.904720  360776 cri.go:89] found id: ""
	I0229 02:17:49.904747  360776 logs.go:276] 0 containers: []
	W0229 02:17:49.904756  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:17:49.904768  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:17:49.904835  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:17:49.941952  360776 cri.go:89] found id: ""
	I0229 02:17:49.941976  360776 logs.go:276] 0 containers: []
	W0229 02:17:49.941985  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:17:49.941992  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:17:49.942050  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:17:49.987518  360776 cri.go:89] found id: ""
	I0229 02:17:49.987549  360776 logs.go:276] 0 containers: []
	W0229 02:17:49.987559  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:17:49.987566  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:17:49.987642  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:17:50.030662  360776 cri.go:89] found id: ""
	I0229 02:17:50.030691  360776 logs.go:276] 0 containers: []
	W0229 02:17:50.030700  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:17:50.030708  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:17:50.030768  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:17:50.075564  360776 cri.go:89] found id: ""
	I0229 02:17:50.075594  360776 logs.go:276] 0 containers: []
	W0229 02:17:50.075605  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:17:50.075617  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:17:50.075634  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:17:50.144223  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:17:50.144261  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:17:50.190615  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:17:50.190649  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:17:50.209014  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:17:50.209041  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:17:50.291096  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:17:50.291121  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:17:50.291135  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:17:52.827936  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:17:52.844926  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:17:52.845027  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:17:52.892302  360776 cri.go:89] found id: ""
	I0229 02:17:52.892336  360776 logs.go:276] 0 containers: []
	W0229 02:17:52.892349  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:17:52.892357  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:17:52.892417  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:17:52.943564  360776 cri.go:89] found id: ""
	I0229 02:17:52.943597  360776 logs.go:276] 0 containers: []
	W0229 02:17:52.943607  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:17:52.943615  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:17:52.943683  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:17:52.990217  360776 cri.go:89] found id: ""
	I0229 02:17:52.990251  360776 logs.go:276] 0 containers: []
	W0229 02:17:52.990269  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:17:52.990278  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:17:52.990347  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:17:53.038508  360776 cri.go:89] found id: ""
	I0229 02:17:53.038542  360776 logs.go:276] 0 containers: []
	W0229 02:17:53.038554  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:17:53.038562  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:17:53.038622  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:17:53.082156  360776 cri.go:89] found id: ""
	I0229 02:17:53.082184  360776 logs.go:276] 0 containers: []
	W0229 02:17:53.082197  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:17:53.082205  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:17:53.082287  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:17:53.149247  360776 cri.go:89] found id: ""
	I0229 02:17:53.149284  360776 logs.go:276] 0 containers: []
	W0229 02:17:53.149295  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:17:53.149304  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:17:53.149371  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:17:53.201169  360776 cri.go:89] found id: ""
	I0229 02:17:53.201199  360776 logs.go:276] 0 containers: []
	W0229 02:17:53.201211  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:17:53.201219  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:17:53.201286  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:17:53.268458  360776 cri.go:89] found id: ""
	I0229 02:17:53.268493  360776 logs.go:276] 0 containers: []
	W0229 02:17:53.268507  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:17:53.268521  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:17:53.268546  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:17:53.288661  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:17:53.288708  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:17:53.371251  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:17:53.371277  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:17:53.371295  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:17:53.415981  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:17:53.416033  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:17:53.464558  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:17:53.464600  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:17:56.030905  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:17:56.046625  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:17:56.046709  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:17:56.090035  360776 cri.go:89] found id: ""
	I0229 02:17:56.090066  360776 logs.go:276] 0 containers: []
	W0229 02:17:56.090094  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:17:56.090103  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:17:56.090176  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:17:56.158245  360776 cri.go:89] found id: ""
	I0229 02:17:56.158276  360776 logs.go:276] 0 containers: []
	W0229 02:17:56.158289  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:17:56.158297  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:17:56.158378  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:17:56.203917  360776 cri.go:89] found id: ""
	I0229 02:17:56.203947  360776 logs.go:276] 0 containers: []
	W0229 02:17:56.203959  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:17:56.203967  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:17:56.204037  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:17:56.267950  360776 cri.go:89] found id: ""
	I0229 02:17:56.267978  360776 logs.go:276] 0 containers: []
	W0229 02:17:56.267995  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:17:56.268003  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:17:56.268065  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:17:56.312936  360776 cri.go:89] found id: ""
	I0229 02:17:56.312967  360776 logs.go:276] 0 containers: []
	W0229 02:17:56.312979  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:17:56.312987  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:17:56.313050  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:17:56.357548  360776 cri.go:89] found id: ""
	I0229 02:17:56.357584  360776 logs.go:276] 0 containers: []
	W0229 02:17:56.357596  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:17:56.357605  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:17:56.357674  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:17:56.401842  360776 cri.go:89] found id: ""
	I0229 02:17:56.401876  360776 logs.go:276] 0 containers: []
	W0229 02:17:56.401890  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:17:56.401898  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:17:56.401965  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:17:56.448506  360776 cri.go:89] found id: ""
	I0229 02:17:56.448538  360776 logs.go:276] 0 containers: []
	W0229 02:17:56.448549  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:17:56.448562  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:17:56.448578  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:17:56.498783  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:17:56.498821  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:17:56.516722  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:17:56.516768  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:17:56.601770  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:17:56.601797  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:17:56.601815  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:17:56.642969  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:17:56.643010  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:17:59.194448  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:17:59.212378  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:17:59.212455  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:17:59.272835  360776 cri.go:89] found id: ""
	I0229 02:17:59.272864  360776 logs.go:276] 0 containers: []
	W0229 02:17:59.272873  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:17:59.272879  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:17:59.272945  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:17:59.326044  360776 cri.go:89] found id: ""
	I0229 02:17:59.326097  360776 logs.go:276] 0 containers: []
	W0229 02:17:59.326110  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:17:59.326119  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:17:59.326195  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:17:59.375112  360776 cri.go:89] found id: ""
	I0229 02:17:59.375147  360776 logs.go:276] 0 containers: []
	W0229 02:17:59.375158  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:17:59.375165  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:17:59.375231  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:17:59.423465  360776 cri.go:89] found id: ""
	I0229 02:17:59.423489  360776 logs.go:276] 0 containers: []
	W0229 02:17:59.423498  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:17:59.423504  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:17:59.423564  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:17:59.464386  360776 cri.go:89] found id: ""
	I0229 02:17:59.464416  360776 logs.go:276] 0 containers: []
	W0229 02:17:59.464427  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:17:59.464433  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:17:59.464493  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:17:59.507714  360776 cri.go:89] found id: ""
	I0229 02:17:59.507746  360776 logs.go:276] 0 containers: []
	W0229 02:17:59.507759  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:17:59.507768  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:17:59.507836  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:17:59.563729  360776 cri.go:89] found id: ""
	I0229 02:17:59.563761  360776 logs.go:276] 0 containers: []
	W0229 02:17:59.563773  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:17:59.563781  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:17:59.563869  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:17:59.623366  360776 cri.go:89] found id: ""
	I0229 02:17:59.623392  360776 logs.go:276] 0 containers: []
	W0229 02:17:59.623404  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:17:59.623417  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:17:59.623432  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:17:59.700723  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:17:59.700783  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:17:59.722858  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:17:59.722904  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:17:59.830864  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:17:59.830892  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:17:59.830908  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:17:59.881944  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:17:59.881996  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:18:02.462408  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:18:02.485957  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:18:02.486017  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:18:02.540769  360776 cri.go:89] found id: ""
	I0229 02:18:02.540803  360776 logs.go:276] 0 containers: []
	W0229 02:18:02.540814  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:18:02.540834  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:18:02.540902  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:18:02.584488  360776 cri.go:89] found id: ""
	I0229 02:18:02.584514  360776 logs.go:276] 0 containers: []
	W0229 02:18:02.584525  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:18:02.584532  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:18:02.584601  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:18:02.644908  360776 cri.go:89] found id: ""
	I0229 02:18:02.644943  360776 logs.go:276] 0 containers: []
	W0229 02:18:02.644956  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:18:02.644963  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:18:02.645031  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:18:02.702464  360776 cri.go:89] found id: ""
	I0229 02:18:02.702498  360776 logs.go:276] 0 containers: []
	W0229 02:18:02.702510  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:18:02.702519  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:18:02.702587  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:18:02.754980  360776 cri.go:89] found id: ""
	I0229 02:18:02.755008  360776 logs.go:276] 0 containers: []
	W0229 02:18:02.755020  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:18:02.755029  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:18:02.755101  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:18:02.807863  360776 cri.go:89] found id: ""
	I0229 02:18:02.807890  360776 logs.go:276] 0 containers: []
	W0229 02:18:02.807901  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:18:02.807908  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:18:02.807964  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:18:02.850910  360776 cri.go:89] found id: ""
	I0229 02:18:02.850943  360776 logs.go:276] 0 containers: []
	W0229 02:18:02.850956  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:18:02.850964  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:18:02.851034  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:18:02.895792  360776 cri.go:89] found id: ""
	I0229 02:18:02.895832  360776 logs.go:276] 0 containers: []
	W0229 02:18:02.895844  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:18:02.895857  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:18:02.895874  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:18:02.951353  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:18:02.951399  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:18:02.970262  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:18:02.970303  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:18:03.055141  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:18:03.055165  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:18:03.055182  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:18:03.091751  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:18:03.091791  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:18:05.646070  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:18:05.663225  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:18:05.663301  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:18:05.712565  360776 cri.go:89] found id: ""
	I0229 02:18:05.712604  360776 logs.go:276] 0 containers: []
	W0229 02:18:05.712623  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:18:05.712632  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:18:05.712697  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:18:05.761656  360776 cri.go:89] found id: ""
	I0229 02:18:05.761685  360776 logs.go:276] 0 containers: []
	W0229 02:18:05.761699  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:18:05.761715  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:18:05.761780  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:18:05.805264  360776 cri.go:89] found id: ""
	I0229 02:18:05.805299  360776 logs.go:276] 0 containers: []
	W0229 02:18:05.805310  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:18:05.805318  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:18:05.805382  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:18:05.853483  360776 cri.go:89] found id: ""
	I0229 02:18:05.853555  360776 logs.go:276] 0 containers: []
	W0229 02:18:05.853569  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:18:05.853578  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:18:05.853653  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:18:05.894561  360776 cri.go:89] found id: ""
	I0229 02:18:05.894589  360776 logs.go:276] 0 containers: []
	W0229 02:18:05.894608  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:18:05.894616  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:18:05.894680  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:18:05.937784  360776 cri.go:89] found id: ""
	I0229 02:18:05.937816  360776 logs.go:276] 0 containers: []
	W0229 02:18:05.937825  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:18:05.937832  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:18:05.937900  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:18:05.982000  360776 cri.go:89] found id: ""
	I0229 02:18:05.982028  360776 logs.go:276] 0 containers: []
	W0229 02:18:05.982039  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:18:05.982046  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:18:05.982136  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:18:06.025395  360776 cri.go:89] found id: ""
	I0229 02:18:06.025430  360776 logs.go:276] 0 containers: []
	W0229 02:18:06.025443  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:18:06.025455  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:18:06.025470  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:18:06.078175  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:18:06.078221  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:18:06.106042  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:18:06.106097  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:18:06.233485  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:18:06.233506  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:18:06.233522  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:18:06.273517  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:18:06.273557  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:18:08.827599  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:18:08.845166  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:18:08.845270  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:18:08.891258  360776 cri.go:89] found id: ""
	I0229 02:18:08.891291  360776 logs.go:276] 0 containers: []
	W0229 02:18:08.891303  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:18:08.891311  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:18:08.891381  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:18:08.936833  360776 cri.go:89] found id: ""
	I0229 02:18:08.936868  360776 logs.go:276] 0 containers: []
	W0229 02:18:08.936879  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:18:08.936888  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:18:08.936962  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:18:08.979759  360776 cri.go:89] found id: ""
	I0229 02:18:08.979788  360776 logs.go:276] 0 containers: []
	W0229 02:18:08.979800  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:18:08.979812  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:18:08.979878  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:18:09.023686  360776 cri.go:89] found id: ""
	I0229 02:18:09.023722  360776 logs.go:276] 0 containers: []
	W0229 02:18:09.023734  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:18:09.023744  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:18:09.023817  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:18:09.068374  360776 cri.go:89] found id: ""
	I0229 02:18:09.068413  360776 logs.go:276] 0 containers: []
	W0229 02:18:09.068426  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:18:09.068434  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:18:09.068502  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:18:09.147948  360776 cri.go:89] found id: ""
	I0229 02:18:09.147976  360776 logs.go:276] 0 containers: []
	W0229 02:18:09.147985  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:18:09.147991  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:18:09.148043  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:18:09.202491  360776 cri.go:89] found id: ""
	I0229 02:18:09.202522  360776 logs.go:276] 0 containers: []
	W0229 02:18:09.202534  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:18:09.202542  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:18:09.202605  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:18:09.248957  360776 cri.go:89] found id: ""
	I0229 02:18:09.248992  360776 logs.go:276] 0 containers: []
	W0229 02:18:09.249005  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:18:09.249018  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:18:09.249038  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:18:09.318433  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:18:09.318476  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:18:09.335205  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:18:09.335240  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:18:09.417917  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:18:09.417952  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:18:09.417969  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:18:09.464739  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:18:09.464779  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:18:12.017825  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:18:12.033452  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:18:12.033518  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:18:12.082587  360776 cri.go:89] found id: ""
	I0229 02:18:12.082621  360776 logs.go:276] 0 containers: []
	W0229 02:18:12.082634  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:18:12.082642  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:18:12.082714  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:18:12.132662  360776 cri.go:89] found id: ""
	I0229 02:18:12.132696  360776 logs.go:276] 0 containers: []
	W0229 02:18:12.132717  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:18:12.132725  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:18:12.132795  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:18:12.204316  360776 cri.go:89] found id: ""
	I0229 02:18:12.204343  360776 logs.go:276] 0 containers: []
	W0229 02:18:12.204351  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:18:12.204357  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:18:12.204417  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:18:12.255146  360776 cri.go:89] found id: ""
	I0229 02:18:12.255178  360776 logs.go:276] 0 containers: []
	W0229 02:18:12.255190  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:18:12.255198  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:18:12.255265  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:18:12.299280  360776 cri.go:89] found id: ""
	I0229 02:18:12.299314  360776 logs.go:276] 0 containers: []
	W0229 02:18:12.299328  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:18:12.299337  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:18:12.299410  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:18:12.340621  360776 cri.go:89] found id: ""
	I0229 02:18:12.340646  360776 logs.go:276] 0 containers: []
	W0229 02:18:12.340658  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:18:12.340667  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:18:12.340722  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:18:12.391888  360776 cri.go:89] found id: ""
	I0229 02:18:12.391926  360776 logs.go:276] 0 containers: []
	W0229 02:18:12.391938  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:18:12.391945  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:18:12.392010  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:18:12.440219  360776 cri.go:89] found id: ""
	I0229 02:18:12.440250  360776 logs.go:276] 0 containers: []
	W0229 02:18:12.440263  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:18:12.440276  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:18:12.440290  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:18:12.495586  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:18:12.495621  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:18:12.513608  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:18:12.513653  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:18:12.587894  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:18:12.587929  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:18:12.587956  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:18:12.625496  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:18:12.625533  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:18:15.187090  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:18:15.206990  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:18:15.207074  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:18:15.261493  360776 cri.go:89] found id: ""
	I0229 02:18:15.261522  360776 logs.go:276] 0 containers: []
	W0229 02:18:15.261535  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:18:15.261543  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:18:15.261620  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:18:15.302408  360776 cri.go:89] found id: ""
	I0229 02:18:15.302437  360776 logs.go:276] 0 containers: []
	W0229 02:18:15.302449  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:18:15.302457  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:18:15.302524  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:18:15.340553  360776 cri.go:89] found id: ""
	I0229 02:18:15.340580  360776 logs.go:276] 0 containers: []
	W0229 02:18:15.340590  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:18:15.340598  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:18:15.340661  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:18:15.383659  360776 cri.go:89] found id: ""
	I0229 02:18:15.383688  360776 logs.go:276] 0 containers: []
	W0229 02:18:15.383699  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:18:15.383708  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:18:15.383777  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:18:15.433164  360776 cri.go:89] found id: ""
	I0229 02:18:15.433200  360776 logs.go:276] 0 containers: []
	W0229 02:18:15.433212  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:18:15.433220  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:18:15.433293  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:18:15.479950  360776 cri.go:89] found id: ""
	I0229 02:18:15.479993  360776 logs.go:276] 0 containers: []
	W0229 02:18:15.480006  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:18:15.480014  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:18:15.480078  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:18:15.519601  360776 cri.go:89] found id: ""
	I0229 02:18:15.519628  360776 logs.go:276] 0 containers: []
	W0229 02:18:15.519637  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:18:15.519644  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:18:15.519707  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:18:15.564564  360776 cri.go:89] found id: ""
	I0229 02:18:15.564598  360776 logs.go:276] 0 containers: []
	W0229 02:18:15.564610  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:18:15.564624  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:18:15.564643  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:18:15.615855  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:18:15.615894  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:18:15.632464  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:18:15.632505  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:18:15.713177  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:18:15.713198  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:18:15.713214  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:18:15.749296  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:18:15.749326  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:18:18.299689  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:18:18.315449  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:18:18.315523  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:18:18.357310  360776 cri.go:89] found id: ""
	I0229 02:18:18.357347  360776 logs.go:276] 0 containers: []
	W0229 02:18:18.357360  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:18:18.357369  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:18:18.357427  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:18:18.410178  360776 cri.go:89] found id: ""
	I0229 02:18:18.410212  360776 logs.go:276] 0 containers: []
	W0229 02:18:18.410224  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:18:18.410232  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:18:18.410300  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:18:18.452273  360776 cri.go:89] found id: ""
	I0229 02:18:18.452303  360776 logs.go:276] 0 containers: []
	W0229 02:18:18.452315  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:18:18.452330  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:18:18.452398  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:18:18.493134  360776 cri.go:89] found id: ""
	I0229 02:18:18.493161  360776 logs.go:276] 0 containers: []
	W0229 02:18:18.493170  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:18:18.493176  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:18:18.493247  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:18:18.530812  360776 cri.go:89] found id: ""
	I0229 02:18:18.530843  360776 logs.go:276] 0 containers: []
	W0229 02:18:18.530855  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:18:18.530864  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:18:18.530931  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:18:18.572183  360776 cri.go:89] found id: ""
	I0229 02:18:18.572216  360776 logs.go:276] 0 containers: []
	W0229 02:18:18.572231  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:18:18.572240  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:18:18.572314  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:18:18.612117  360776 cri.go:89] found id: ""
	I0229 02:18:18.612148  360776 logs.go:276] 0 containers: []
	W0229 02:18:18.612160  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:18:18.612169  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:18:18.612230  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:18:18.653827  360776 cri.go:89] found id: ""
	I0229 02:18:18.653855  360776 logs.go:276] 0 containers: []
	W0229 02:18:18.653866  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:18:18.653879  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:18:18.653898  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:18:18.688058  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:18:18.688094  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:18:18.735458  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:18:18.735493  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:18:18.795735  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:18:18.795780  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:18:18.816207  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:18:18.816239  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:18:18.928414  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
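Every "describe nodes" attempt in this stretch fails identically: kubectl targets the server recorded in /var/lib/minikube/kubeconfig (localhost:8443), and the dial is refused because no kube-apiserver container ever started. The same symptom can be reproduced with a plain TCP reachability check, sketched below; the address is taken from the log and the snippet is an illustration, not part of the test harness.

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	// With no kube-apiserver listening, this dial is refused, which is
    	// exactly the "connection to the server localhost:8443 was refused"
    	// error kubectl reports above.
    	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
    	if err != nil {
    		fmt.Println("apiserver not reachable:", err)
    		return
    	}
    	conn.Close()
    	fmt.Println("apiserver port is open")
    }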
	I0229 02:18:21.429284  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:18:21.445010  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:18:21.445084  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:18:21.484084  360776 cri.go:89] found id: ""
	I0229 02:18:21.484128  360776 logs.go:276] 0 containers: []
	W0229 02:18:21.484141  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:18:21.484159  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:18:21.484223  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:18:21.536516  360776 cri.go:89] found id: ""
	I0229 02:18:21.536550  360776 logs.go:276] 0 containers: []
	W0229 02:18:21.536563  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:18:21.536571  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:18:21.536636  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:18:21.588732  360776 cri.go:89] found id: ""
	I0229 02:18:21.588761  360776 logs.go:276] 0 containers: []
	W0229 02:18:21.588773  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:18:21.588782  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:18:21.588843  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:18:21.644434  360776 cri.go:89] found id: ""
	I0229 02:18:21.644470  360776 logs.go:276] 0 containers: []
	W0229 02:18:21.644483  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:18:21.644491  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:18:21.644560  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:18:21.685496  360776 cri.go:89] found id: ""
	I0229 02:18:21.685528  360776 logs.go:276] 0 containers: []
	W0229 02:18:21.685540  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:18:21.685548  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:18:21.685615  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:18:21.741146  360776 cri.go:89] found id: ""
	I0229 02:18:21.741176  360776 logs.go:276] 0 containers: []
	W0229 02:18:21.741188  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:18:21.741196  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:18:21.741287  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:18:21.790924  360776 cri.go:89] found id: ""
	I0229 02:18:21.790953  360776 logs.go:276] 0 containers: []
	W0229 02:18:21.790964  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:18:21.790972  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:18:21.791040  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:18:21.843079  360776 cri.go:89] found id: ""
	I0229 02:18:21.843107  360776 logs.go:276] 0 containers: []
	W0229 02:18:21.843118  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:18:21.843131  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:18:21.843155  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:18:21.917006  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:18:21.917035  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:18:21.987268  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:18:21.987313  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:18:22.009660  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:18:22.009699  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:18:22.101976  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:18:22.102000  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:18:22.102017  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:18:24.648787  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:18:24.663511  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:18:24.663574  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:18:24.702299  360776 cri.go:89] found id: ""
	I0229 02:18:24.702329  360776 logs.go:276] 0 containers: []
	W0229 02:18:24.702342  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:18:24.702349  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:18:24.702414  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:18:24.741664  360776 cri.go:89] found id: ""
	I0229 02:18:24.741696  360776 logs.go:276] 0 containers: []
	W0229 02:18:24.741708  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:18:24.741720  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:18:24.741782  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:18:24.809755  360776 cri.go:89] found id: ""
	I0229 02:18:24.809788  360776 logs.go:276] 0 containers: []
	W0229 02:18:24.809799  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:18:24.809807  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:18:24.809867  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:18:24.850308  360776 cri.go:89] found id: ""
	I0229 02:18:24.850335  360776 logs.go:276] 0 containers: []
	W0229 02:18:24.850344  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:18:24.850351  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:18:24.850408  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:18:24.903507  360776 cri.go:89] found id: ""
	I0229 02:18:24.903539  360776 logs.go:276] 0 containers: []
	W0229 02:18:24.903551  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:18:24.903559  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:18:24.903624  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:18:24.952996  360776 cri.go:89] found id: ""
	I0229 02:18:24.953026  360776 logs.go:276] 0 containers: []
	W0229 02:18:24.953039  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:18:24.953048  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:18:24.953119  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:18:24.999301  360776 cri.go:89] found id: ""
	I0229 02:18:24.999334  360776 logs.go:276] 0 containers: []
	W0229 02:18:24.999347  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:18:24.999355  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:18:24.999418  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:18:25.044310  360776 cri.go:89] found id: ""
	I0229 02:18:25.044350  360776 logs.go:276] 0 containers: []
	W0229 02:18:25.044362  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:18:25.044375  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:18:25.044391  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:18:25.091374  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:18:25.091407  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:18:25.109080  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:18:25.109118  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:18:25.186611  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:18:25.186639  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:18:25.186663  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:18:25.226779  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:18:25.226825  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:18:27.775896  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:18:27.789596  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:18:27.789662  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:18:27.834159  360776 cri.go:89] found id: ""
	I0229 02:18:27.834186  360776 logs.go:276] 0 containers: []
	W0229 02:18:27.834198  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:18:27.834207  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:18:27.834278  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:18:27.887355  360776 cri.go:89] found id: ""
	I0229 02:18:27.887386  360776 logs.go:276] 0 containers: []
	W0229 02:18:27.887398  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:18:27.887407  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:18:27.887481  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:18:27.927671  360776 cri.go:89] found id: ""
	I0229 02:18:27.927710  360776 logs.go:276] 0 containers: []
	W0229 02:18:27.927724  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:18:27.927740  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:18:27.927819  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:18:27.983438  360776 cri.go:89] found id: ""
	I0229 02:18:27.983471  360776 logs.go:276] 0 containers: []
	W0229 02:18:27.983484  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:18:27.983493  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:18:27.983562  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:18:28.026112  360776 cri.go:89] found id: ""
	I0229 02:18:28.026143  360776 logs.go:276] 0 containers: []
	W0229 02:18:28.026156  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:18:28.026238  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:18:28.026310  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:18:28.069085  360776 cri.go:89] found id: ""
	I0229 02:18:28.069118  360776 logs.go:276] 0 containers: []
	W0229 02:18:28.069130  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:18:28.069138  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:18:28.069285  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:18:28.115010  360776 cri.go:89] found id: ""
	I0229 02:18:28.115037  360776 logs.go:276] 0 containers: []
	W0229 02:18:28.115046  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:18:28.115051  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:18:28.115113  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:18:28.157726  360776 cri.go:89] found id: ""
	I0229 02:18:28.157756  360776 logs.go:276] 0 containers: []
	W0229 02:18:28.157769  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:18:28.157783  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:18:28.157800  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:18:28.218148  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:18:28.218196  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:18:28.238106  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:18:28.238142  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:18:28.328947  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:18:28.328971  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:18:28.328988  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:18:28.364795  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:18:28.364831  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:18:30.914422  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:18:30.929248  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:18:30.929334  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:18:30.983535  360776 cri.go:89] found id: ""
	I0229 02:18:30.983566  360776 logs.go:276] 0 containers: []
	W0229 02:18:30.983577  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:18:30.983585  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:18:30.983644  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:18:31.037809  360776 cri.go:89] found id: ""
	I0229 02:18:31.037842  360776 logs.go:276] 0 containers: []
	W0229 02:18:31.037853  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:18:31.037862  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:18:31.037933  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:18:31.089101  360776 cri.go:89] found id: ""
	I0229 02:18:31.089134  360776 logs.go:276] 0 containers: []
	W0229 02:18:31.089146  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:18:31.089154  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:18:31.089219  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:18:31.139413  360776 cri.go:89] found id: ""
	I0229 02:18:31.139444  360776 logs.go:276] 0 containers: []
	W0229 02:18:31.139456  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:18:31.139463  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:18:31.139542  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:18:31.177185  360776 cri.go:89] found id: ""
	I0229 02:18:31.177214  360776 logs.go:276] 0 containers: []
	W0229 02:18:31.177223  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:18:31.177229  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:18:31.177295  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:18:31.221339  360776 cri.go:89] found id: ""
	I0229 02:18:31.221374  360776 logs.go:276] 0 containers: []
	W0229 02:18:31.221387  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:18:31.221395  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:18:31.221461  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:18:31.261770  360776 cri.go:89] found id: ""
	I0229 02:18:31.261803  360776 logs.go:276] 0 containers: []
	W0229 02:18:31.261815  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:18:31.261824  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:18:31.261895  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:18:31.309126  360776 cri.go:89] found id: ""
	I0229 02:18:31.309157  360776 logs.go:276] 0 containers: []
	W0229 02:18:31.309168  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:18:31.309179  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:18:31.309193  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:18:31.362509  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:18:31.362552  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:18:31.379334  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:18:31.379383  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:18:31.471339  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:18:31.471359  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:18:31.471372  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:18:31.511126  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:18:31.511172  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
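	The block above is one pass of minikube's log-gathering loop: for each expected control-plane component it shells into the guest, asks crictl for any container (running or exited) with a matching name, and warns when none is found. A minimal Go sketch of that per-component lookup follows; listContainers and the hard-coded component list are illustrative stand-ins for minikube's cri.go, and it assumes crictl is installed and reachable via sudo on the machine it runs on.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// listContainers returns the IDs of all containers (any state) whose
	// name matches the given component, via `crictl ps -a --quiet --name=...`.
	func listContainers(component string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a",
			"--quiet", "--name="+component).Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}

	func main() {
		components := []string{
			"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet",
			"kubernetes-dashboard",
		}
		for _, c := range components {
			ids, err := listContainers(c)
			if err != nil {
				fmt.Printf("listing %q failed: %v\n", c, err)
				continue
			}
			if len(ids) == 0 {
				// Mirrors the W-level "No container was found matching" lines.
				fmt.Printf("No container was found matching %q\n", c)
			}
		}
	}

	An empty ID list for every component, as in this run, is why the gatherer falls back to journalctl, dmesg and `crictl ps -a` output instead of per-container logs.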
	I0229 02:18:34.063372  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:18:34.077222  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:18:34.077297  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:18:34.116752  360776 cri.go:89] found id: ""
	I0229 02:18:34.116793  360776 logs.go:276] 0 containers: []
	W0229 02:18:34.116806  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:18:34.116815  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:18:34.116880  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:18:34.157658  360776 cri.go:89] found id: ""
	I0229 02:18:34.157689  360776 logs.go:276] 0 containers: []
	W0229 02:18:34.157700  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:18:34.157708  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:18:34.157779  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:18:34.199922  360776 cri.go:89] found id: ""
	I0229 02:18:34.199957  360776 logs.go:276] 0 containers: []
	W0229 02:18:34.199969  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:18:34.199977  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:18:34.200044  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:18:34.242474  360776 cri.go:89] found id: ""
	I0229 02:18:34.242505  360776 logs.go:276] 0 containers: []
	W0229 02:18:34.242517  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:18:34.242526  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:18:34.242585  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:18:34.289308  360776 cri.go:89] found id: ""
	I0229 02:18:34.289338  360776 logs.go:276] 0 containers: []
	W0229 02:18:34.289360  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:18:34.289367  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:18:34.289431  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:18:34.335947  360776 cri.go:89] found id: ""
	I0229 02:18:34.335985  360776 logs.go:276] 0 containers: []
	W0229 02:18:34.335997  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:18:34.336005  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:18:34.336073  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:18:34.377048  360776 cri.go:89] found id: ""
	I0229 02:18:34.377085  360776 logs.go:276] 0 containers: []
	W0229 02:18:34.377097  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:18:34.377107  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:18:34.377181  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:18:34.424208  360776 cri.go:89] found id: ""
	I0229 02:18:34.424238  360776 logs.go:276] 0 containers: []
	W0229 02:18:34.424250  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:18:34.424270  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:18:34.424288  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:18:34.500223  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:18:34.500245  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:18:34.500263  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:18:34.534652  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:18:34.534688  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:18:34.593369  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:18:34.593405  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:18:34.646940  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:18:34.646982  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:18:37.169523  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:18:37.184168  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:18:37.184245  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:18:37.232979  360776 cri.go:89] found id: ""
	I0229 02:18:37.233015  360776 logs.go:276] 0 containers: []
	W0229 02:18:37.233026  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:18:37.233037  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:18:37.233110  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:18:37.275771  360776 cri.go:89] found id: ""
	I0229 02:18:37.275796  360776 logs.go:276] 0 containers: []
	W0229 02:18:37.275805  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:18:37.275811  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:18:37.275877  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:18:37.322421  360776 cri.go:89] found id: ""
	I0229 02:18:37.322451  360776 logs.go:276] 0 containers: []
	W0229 02:18:37.322460  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:18:37.322466  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:18:37.322525  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:18:37.366974  360776 cri.go:89] found id: ""
	I0229 02:18:37.367001  360776 logs.go:276] 0 containers: []
	W0229 02:18:37.367011  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:18:37.367020  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:18:37.367080  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:18:37.408780  360776 cri.go:89] found id: ""
	I0229 02:18:37.408811  360776 logs.go:276] 0 containers: []
	W0229 02:18:37.408822  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:18:37.408828  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:18:37.408880  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:18:37.447402  360776 cri.go:89] found id: ""
	I0229 02:18:37.447429  360776 logs.go:276] 0 containers: []
	W0229 02:18:37.447441  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:18:37.447449  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:18:37.447511  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:18:37.486454  360776 cri.go:89] found id: ""
	I0229 02:18:37.486491  360776 logs.go:276] 0 containers: []
	W0229 02:18:37.486502  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:18:37.486510  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:18:37.486579  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:18:37.531484  360776 cri.go:89] found id: ""
	I0229 02:18:37.531517  360776 logs.go:276] 0 containers: []
	W0229 02:18:37.531533  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:18:37.531545  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:18:37.531562  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:18:37.581274  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:18:37.581312  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:18:37.601745  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:18:37.601777  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:18:37.707773  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:18:37.707801  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:18:37.707818  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:18:37.740658  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:18:37.740698  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:18:40.296427  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:18:40.311365  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:18:40.311439  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:18:40.354647  360776 cri.go:89] found id: ""
	I0229 02:18:40.354675  360776 logs.go:276] 0 containers: []
	W0229 02:18:40.354693  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:18:40.354701  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:18:40.354769  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:18:40.400490  360776 cri.go:89] found id: ""
	I0229 02:18:40.400520  360776 logs.go:276] 0 containers: []
	W0229 02:18:40.400529  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:18:40.400535  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:18:40.400602  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:18:40.442029  360776 cri.go:89] found id: ""
	I0229 02:18:40.442051  360776 logs.go:276] 0 containers: []
	W0229 02:18:40.442060  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:18:40.442065  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:18:40.442169  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:18:40.481183  360776 cri.go:89] found id: ""
	I0229 02:18:40.481216  360776 logs.go:276] 0 containers: []
	W0229 02:18:40.481228  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:18:40.481237  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:18:40.481316  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:18:40.523076  360776 cri.go:89] found id: ""
	I0229 02:18:40.523104  360776 logs.go:276] 0 containers: []
	W0229 02:18:40.523113  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:18:40.523118  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:18:40.523209  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:18:40.561787  360776 cri.go:89] found id: ""
	I0229 02:18:40.561817  360776 logs.go:276] 0 containers: []
	W0229 02:18:40.561826  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:18:40.561832  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:18:40.561908  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:18:40.598621  360776 cri.go:89] found id: ""
	I0229 02:18:40.598647  360776 logs.go:276] 0 containers: []
	W0229 02:18:40.598655  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:18:40.598662  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:18:40.598710  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:18:40.637701  360776 cri.go:89] found id: ""
	I0229 02:18:40.637734  360776 logs.go:276] 0 containers: []
	W0229 02:18:40.637745  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:18:40.637758  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:18:40.637775  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:18:40.685317  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:18:40.685351  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:18:40.735348  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:18:40.735386  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:18:40.751373  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:18:40.751434  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:18:40.822604  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:18:40.822624  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:18:40.822637  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:18:43.357769  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:18:43.373119  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:18:43.373186  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:18:43.409160  360776 cri.go:89] found id: ""
	I0229 02:18:43.409181  360776 logs.go:276] 0 containers: []
	W0229 02:18:43.409189  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:18:43.409195  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:18:43.409238  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:18:43.447193  360776 cri.go:89] found id: ""
	I0229 02:18:43.447222  360776 logs.go:276] 0 containers: []
	W0229 02:18:43.447231  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:18:43.447237  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:18:43.447296  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:18:43.487906  360776 cri.go:89] found id: ""
	I0229 02:18:43.487934  360776 logs.go:276] 0 containers: []
	W0229 02:18:43.487942  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:18:43.487949  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:18:43.488008  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:18:43.527968  360776 cri.go:89] found id: ""
	I0229 02:18:43.528002  360776 logs.go:276] 0 containers: []
	W0229 02:18:43.528016  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:18:43.528024  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:18:43.528100  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:18:43.573298  360776 cri.go:89] found id: ""
	I0229 02:18:43.573333  360776 logs.go:276] 0 containers: []
	W0229 02:18:43.573344  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:18:43.573351  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:18:43.573461  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:18:43.630816  360776 cri.go:89] found id: ""
	I0229 02:18:43.630856  360776 logs.go:276] 0 containers: []
	W0229 02:18:43.630867  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:18:43.630881  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:18:43.630954  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:18:43.701516  360776 cri.go:89] found id: ""
	I0229 02:18:43.701547  360776 logs.go:276] 0 containers: []
	W0229 02:18:43.701559  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:18:43.701567  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:18:43.701636  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:18:43.747444  360776 cri.go:89] found id: ""
	I0229 02:18:43.747474  360776 logs.go:276] 0 containers: []
	W0229 02:18:43.747484  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:18:43.747494  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:18:43.747510  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:18:43.828216  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:18:43.828246  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:18:43.828270  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:18:43.874647  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:18:43.874684  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:18:43.937776  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:18:43.937808  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:18:43.989210  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:18:43.989250  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:18:46.506056  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:18:46.519717  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:18:46.519784  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:18:46.585095  360776 cri.go:89] found id: ""
	I0229 02:18:46.585128  360776 logs.go:276] 0 containers: []
	W0229 02:18:46.585141  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:18:46.585149  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:18:46.585212  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:18:46.638520  360776 cri.go:89] found id: ""
	I0229 02:18:46.638553  360776 logs.go:276] 0 containers: []
	W0229 02:18:46.638565  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:18:46.638572  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:18:46.638637  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:18:46.691413  360776 cri.go:89] found id: ""
	I0229 02:18:46.691446  360776 logs.go:276] 0 containers: []
	W0229 02:18:46.691458  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:18:46.691466  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:18:46.691532  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:18:46.735054  360776 cri.go:89] found id: ""
	I0229 02:18:46.735083  360776 logs.go:276] 0 containers: []
	W0229 02:18:46.735092  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:18:46.735098  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:18:46.735159  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:18:46.772486  360776 cri.go:89] found id: ""
	I0229 02:18:46.772531  360776 logs.go:276] 0 containers: []
	W0229 02:18:46.772543  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:18:46.772551  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:18:46.772610  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:18:46.815466  360776 cri.go:89] found id: ""
	I0229 02:18:46.815491  360776 logs.go:276] 0 containers: []
	W0229 02:18:46.815499  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:18:46.815505  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:18:46.815553  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:18:46.853168  360776 cri.go:89] found id: ""
	I0229 02:18:46.853199  360776 logs.go:276] 0 containers: []
	W0229 02:18:46.853212  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:18:46.853220  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:18:46.853299  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:18:46.894320  360776 cri.go:89] found id: ""
	I0229 02:18:46.894353  360776 logs.go:276] 0 containers: []
	W0229 02:18:46.894365  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:18:46.894378  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:18:46.894394  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:18:46.944593  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:18:46.944631  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:18:46.960405  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:18:46.960433  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:18:47.029929  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:18:47.029960  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:18:47.029977  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:18:47.065292  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:18:47.065327  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:18:49.620521  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:18:49.636247  360776 kubeadm.go:640] restartCluster took 4m12.880265518s
	W0229 02:18:49.636335  360776 out.go:239] ! Unable to restart cluster, will reset it: apiserver healthz: apiserver process never appeared
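	Before giving up, restartCluster spent 4m12s re-running `sudo pgrep -xnf kube-apiserver.*minikube.*`, roughly every three seconds judging by the timestamps above. A hedged sketch of such a wait loop; waitForAPIServerProcess, runGuest and the 3s interval are inferred from the log, not lifted from minikube's code:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// runGuest stands in for minikube's ssh_runner; here it simply runs the
	// command locally through bash.
	func runGuest(cmd string) error {
		return exec.Command("/bin/bash", "-c", cmd).Run()
	}

	// waitForAPIServerProcess polls pgrep until the apiserver process shows
	// up or the deadline passes. pgrep exits non-zero when nothing matches.
	func waitForAPIServerProcess(timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if runGuest("sudo pgrep -xnf kube-apiserver.*minikube.*") == nil {
				return nil // process found
			}
			time.Sleep(3 * time.Second)
		}
		return fmt.Errorf("apiserver healthz: apiserver process never appeared")
	}

	func main() {
		if err := waitForAPIServerProcess(4 * time.Minute); err != nil {
			fmt.Println("! Unable to restart cluster, will reset it:", err)
		}
	}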
	I0229 02:18:49.636372  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0229 02:18:50.114412  360776 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 02:18:50.130257  360776 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0229 02:18:50.141556  360776 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 02:18:50.152882  360776 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
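	The config check above treats a non-zero exit from `ls` as "no stale kubeconfigs left behind", so cleanup is skipped and minikube proceeds straight to `kubeadm init`. A small sketch of that decision, with hasStaleConfigs as a hypothetical name:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// hasStaleConfigs reports whether any kubeconfig files kubeadm would
	// regenerate are still present from an earlier run; `ls` exits with
	// status 2 when none of them exist.
	func hasStaleConfigs() bool {
		files := []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		}
		args := append([]string{"ls", "-la"}, files...)
		return exec.Command("sudo", args...).Run() == nil
	}

	func main() {
		if !hasStaleConfigs() {
			fmt.Println("config check failed, skipping stale config cleanup")
		}
	}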
	I0229 02:18:50.152929  360776 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0229 02:18:50.213815  360776 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0229 02:18:50.213922  360776 kubeadm.go:322] [preflight] Running pre-flight checks
	I0229 02:18:50.341927  360776 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0229 02:18:50.342103  360776 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0229 02:18:50.342249  360776 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0229 02:18:50.577201  360776 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0229 02:18:50.578563  360776 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0229 02:18:50.587158  360776 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0229 02:18:50.712207  360776 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0229 02:18:50.714032  360776 out.go:204]   - Generating certificates and keys ...
	I0229 02:18:50.714149  360776 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0229 02:18:50.716103  360776 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0229 02:18:50.717503  360776 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0229 02:18:50.718203  360776 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0229 02:18:50.719194  360776 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0229 02:18:50.719913  360776 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0229 02:18:50.721364  360776 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0229 02:18:50.722412  360776 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0229 02:18:50.723087  360776 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0229 02:18:50.723663  360776 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0229 02:18:50.723813  360776 kubeadm.go:322] [certs] Using the existing "sa" key
	I0229 02:18:50.724029  360776 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0229 02:18:51.003432  360776 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0229 02:18:51.145978  360776 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0229 02:18:51.230808  360776 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0229 02:18:51.340889  360776 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0229 02:18:51.341726  360776 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0229 02:18:51.343443  360776 out.go:204]   - Booting up control plane ...
	I0229 02:18:51.343564  360776 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0229 02:18:51.347723  360776 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0229 02:18:51.348592  360776 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0229 02:18:51.349514  360776 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0229 02:18:51.352720  360776 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0229 02:19:31.352923  360776 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0229 02:19:31.353370  360776 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 02:19:31.353570  360776 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 02:19:36.354842  360776 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 02:19:36.355179  360776 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 02:19:46.356431  360776 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 02:19:46.356735  360776 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 02:20:06.357825  360776 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 02:20:06.358110  360776 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 02:20:46.359040  360776 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 02:20:46.359315  360776 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 02:20:46.359346  360776 kubeadm.go:322] 
	I0229 02:20:46.359398  360776 kubeadm.go:322] Unfortunately, an error has occurred:
	I0229 02:20:46.359458  360776 kubeadm.go:322] 	timed out waiting for the condition
	I0229 02:20:46.359467  360776 kubeadm.go:322] 
	I0229 02:20:46.359511  360776 kubeadm.go:322] This error is likely caused by:
	I0229 02:20:46.359565  360776 kubeadm.go:322] 	- The kubelet is not running
	I0229 02:20:46.359711  360776 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0229 02:20:46.359720  360776 kubeadm.go:322] 
	I0229 02:20:46.359823  360776 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0229 02:20:46.359867  360776 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0229 02:20:46.359894  360776 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0229 02:20:46.359900  360776 kubeadm.go:322] 
	I0229 02:20:46.360005  360776 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0229 02:20:46.360128  360776 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0229 02:20:46.360236  360776 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0229 02:20:46.360310  360776 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0229 02:20:46.360381  360776 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0229 02:20:46.360410  360776 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0229 02:20:46.361502  360776 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0229 02:20:46.361603  360776 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0229 02:20:46.361688  360776 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	W0229 02:20:46.361890  360776 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
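	The [kubelet-check] cadence in this failure (first message 40s in, then retries 5s, 10s, 20s and 40s apart) is kubeadm polling the kubelet's local healthz endpoint until it answers or the 4m0s wait-control-plane budget runs out. A rough reconstruction, with the intervals read off the timestamps above rather than taken from kubeadm's source:

	package main

	import (
		"fmt"
		"net/http"
		"time"
	)

	// kubeletHealthy GETs the kubelet's local healthz endpoint, as the
	// [kubelet-check] lines do; a refused connection means not healthy.
	func kubeletHealthy() bool {
		resp, err := http.Get("http://localhost:10248/healthz")
		if err != nil {
			return false
		}
		defer resp.Body.Close()
		return resp.StatusCode == http.StatusOK
	}

	func main() {
		time.Sleep(40 * time.Second)
		fmt.Println("[kubelet-check] Initial timeout of 40s passed.")
		for _, backoff := range []time.Duration{5, 10, 20, 40} {
			if kubeletHealthy() {
				return
			}
			fmt.Println("[kubelet-check] It seems like the kubelet isn't running or healthy.")
			time.Sleep(backoff * time.Second)
		}
		fmt.Println("Unfortunately, an error has occurred:\n\ttimed out waiting for the condition")
	}

	Here the probe never succeeded because the kubelet process itself never came up, which is also why every later crictl listing stays empty.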
	
	I0229 02:20:46.361946  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0229 02:20:46.833083  360776 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 02:20:46.850670  360776 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 02:20:46.863291  360776 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 02:20:46.863352  360776 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0229 02:20:46.929466  360776 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0229 02:20:46.929532  360776 kubeadm.go:322] [preflight] Running pre-flight checks
	I0229 02:20:47.064941  360776 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0229 02:20:47.065277  360776 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0229 02:20:47.065515  360776 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0229 02:20:47.284721  360776 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0229 02:20:47.285859  360776 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0229 02:20:47.295028  360776 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0229 02:20:47.429614  360776 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0229 02:20:47.431229  360776 out.go:204]   - Generating certificates and keys ...
	I0229 02:20:47.431315  360776 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0229 02:20:47.431389  360776 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0229 02:20:47.431487  360776 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0229 02:20:47.431603  360776 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0229 02:20:47.431719  360776 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0229 02:20:47.431796  360776 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0229 02:20:47.431890  360776 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0229 02:20:47.431974  360776 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0229 02:20:47.432093  360776 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0229 02:20:47.432212  360776 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0229 02:20:47.432275  360776 kubeadm.go:322] [certs] Using the existing "sa" key
	I0229 02:20:47.432366  360776 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0229 02:20:47.946255  360776 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0229 02:20:48.258186  360776 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0229 02:20:48.398982  360776 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0229 02:20:48.545961  360776 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0229 02:20:48.546829  360776 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0229 02:20:48.548500  360776 out.go:204]   - Booting up control plane ...
	I0229 02:20:48.548614  360776 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0229 02:20:48.552604  360776 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0229 02:20:48.553548  360776 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0229 02:20:48.554256  360776 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0229 02:20:48.558508  360776 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0229 02:21:28.560199  360776 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0229 02:21:28.560645  360776 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 02:21:28.560944  360776 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 02:21:33.561853  360776 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 02:21:33.562057  360776 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 02:21:43.562844  360776 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 02:21:43.563063  360776 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 02:22:03.563980  360776 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 02:22:03.564274  360776 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 02:22:43.566143  360776 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 02:22:43.566419  360776 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 02:22:43.566432  360776 kubeadm.go:322] 
	I0229 02:22:43.566494  360776 kubeadm.go:322] Unfortunately, an error has occurred:
	I0229 02:22:43.566562  360776 kubeadm.go:322] 	timed out waiting for the condition
	I0229 02:22:43.566573  360776 kubeadm.go:322] 
	I0229 02:22:43.566621  360776 kubeadm.go:322] This error is likely caused by:
	I0229 02:22:43.566669  360776 kubeadm.go:322] 	- The kubelet is not running
	I0229 02:22:43.566789  360776 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0229 02:22:43.566798  360776 kubeadm.go:322] 
	I0229 02:22:43.566954  360776 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0229 02:22:43.567000  360776 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0229 02:22:43.567049  360776 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0229 02:22:43.567060  360776 kubeadm.go:322] 
	I0229 02:22:43.567282  360776 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0229 02:22:43.567417  360776 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0229 02:22:43.567521  360776 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0229 02:22:43.567592  360776 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0229 02:22:43.567684  360776 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0229 02:22:43.567736  360776 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0229 02:22:43.568136  360776 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0229 02:22:43.568244  360776 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0229 02:22:43.568368  360776 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
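Every [kubelet-check] retry above hits the same endpoint: kubelet's healthz server on port 10248. A minimal manual reproduction of that probe, assuming the profile VM is still up and reachable via `minikube ssh` (profile name taken from the failing command later in this log):

    # Reproduce kubeadm's kubelet health probe from inside the VM
    minikube ssh -p old-k8s-version-254968 -- "curl -sS http://localhost:10248/healthz"
    # A running kubelet answers "ok"; "connection refused" matches the failures logged above.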
	I0229 02:22:43.568439  360776 kubeadm.go:406] StartCluster complete in 8m6.863500244s
	I0229 02:22:43.568498  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:22:43.568644  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:22:43.619887  360776 cri.go:89] found id: ""
	I0229 02:22:43.619917  360776 logs.go:276] 0 containers: []
	W0229 02:22:43.619926  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:22:43.619932  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:22:43.619996  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:22:43.658073  360776 cri.go:89] found id: ""
	I0229 02:22:43.658110  360776 logs.go:276] 0 containers: []
	W0229 02:22:43.658120  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:22:43.658127  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:22:43.658197  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:22:43.697445  360776 cri.go:89] found id: ""
	I0229 02:22:43.697476  360776 logs.go:276] 0 containers: []
	W0229 02:22:43.697489  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:22:43.697495  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:22:43.697561  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:22:43.736241  360776 cri.go:89] found id: ""
	I0229 02:22:43.736270  360776 logs.go:276] 0 containers: []
	W0229 02:22:43.736278  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:22:43.736285  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:22:43.736345  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:22:43.775185  360776 cri.go:89] found id: ""
	I0229 02:22:43.775212  360776 logs.go:276] 0 containers: []
	W0229 02:22:43.775221  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:22:43.775227  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:22:43.775292  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:22:43.815309  360776 cri.go:89] found id: ""
	I0229 02:22:43.815338  360776 logs.go:276] 0 containers: []
	W0229 02:22:43.815347  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:22:43.815353  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:22:43.815436  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:22:43.860248  360776 cri.go:89] found id: ""
	I0229 02:22:43.860284  360776 logs.go:276] 0 containers: []
	W0229 02:22:43.860296  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:22:43.860305  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:22:43.860375  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:22:43.918615  360776 cri.go:89] found id: ""
	I0229 02:22:43.918644  360776 logs.go:276] 0 containers: []
	W0229 02:22:43.918656  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
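Each control-plane container was probed one name at a time above, and every probe came back empty. As a sketch, the same sweep collapses into a single pass using the same crictl flags the log itself runs, executed inside the node:

    # One-pass version of the per-component checks above
    sudo crictl ps -a | grep -E 'kube-apiserver|etcd|coredns|kube-scheduler|kube-proxy|kube-controller-manager'
    # No matches confirms the kubelet never launched any of the static pods.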
	I0229 02:22:43.918671  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:22:43.918687  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:22:43.966006  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:22:43.966045  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:22:43.981843  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:22:43.981875  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:22:44.056838  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
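The describe-nodes failure is a downstream symptom: nothing is serving the apiserver port 8443, so every kubectl call is refused. A quick confirmation from inside the VM, assuming curl is present in the guest image (the kubelet-check output above suggests it is):

    # Confirm the apiserver port is dead inside the node
    minikube ssh -p old-k8s-version-254968 -- "curl -k -sS https://localhost:8443/healthz"
    # "connection refused" here matches the kubectl error above.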
	I0229 02:22:44.056870  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:22:44.056887  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:22:44.090353  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:22:44.090384  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0229 02:22:44.143169  360776 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
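kubeadm's troubleshooting advice above is written for docker, but this run uses containerd, so the equivalent inspection goes through crictl. A sketch, run inside the node (e.g. via `minikube ssh -p old-k8s-version-254968`):

    # containerd equivalent of 'docker ps -a | grep kube | grep -v pause'
    sudo crictl ps -a | grep kube | grep -v pause
    # containerd equivalent of 'docker logs CONTAINERID'
    sudo crictl logs CONTAINERID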
	W0229 02:22:44.143235  360776 out.go:239] * 
	W0229 02:22:44.143336  360776 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0229 02:22:44.143366  360776 out.go:239] * 
	W0229 02:22:44.144361  360776 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
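To produce the attachment the box asks for, the same bundle collected in the post-mortem below can be written to a file (binary path and profile name as used throughout this run):

    out/minikube-linux-amd64 -p old-k8s-version-254968 logs --file=logs.txt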
	I0229 02:22:44.147267  360776 out.go:177] 
	W0229 02:22:44.148417  360776 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0229 02:22:44.148458  360776 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0229 02:22:44.148476  360776 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
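Applied to the command that failed here, the suggestion amounts to rerunning the start with one extra flag; all other flags are copied from the failing invocation recorded below, and whether the cgroup-driver override actually fixes this run is untested:

    out/minikube-linux-amd64 start -p old-k8s-version-254968 --memory=2200 \
      --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system \
      --disable-driver-mounts --keep-context=false --driver=kvm2 \
      --container-runtime=containerd --kubernetes-version=v1.16.0 \
      --extra-config=kubelet.cgroup-driver=systemd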
	I0229 02:22:44.149710  360776 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p old-k8s-version-254968 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.16.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-254968 -n old-k8s-version-254968
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-254968 -n old-k8s-version-254968: exit status 2 (263.85944ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-254968 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-254968 logs -n 25: (1.121246101s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p newest-cni-268307                                   | newest-cni-268307            | jenkins | v1.32.0 | 29 Feb 24 02:11 UTC | 29 Feb 24 02:11 UTC |
	| delete  | -p                                                     | disable-driver-mounts-276073 | jenkins | v1.32.0 | 29 Feb 24 02:11 UTC | 29 Feb 24 02:11 UTC |
	|         | disable-driver-mounts-276073                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-665766                                  | embed-certs-665766           | jenkins | v1.32.0 | 29 Feb 24 02:11 UTC | 29 Feb 24 02:13 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-254968        | old-k8s-version-254968       | jenkins | v1.32.0 | 29 Feb 24 02:12 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-907398                  | no-preload-907398            | jenkins | v1.32.0 | 29 Feb 24 02:12 UTC | 29 Feb 24 02:12 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-907398                                   | no-preload-907398            | jenkins | v1.32.0 | 29 Feb 24 02:12 UTC | 29 Feb 24 02:18 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-254367       | default-k8s-diff-port-254367 | jenkins | v1.32.0 | 29 Feb 24 02:12 UTC | 29 Feb 24 02:12 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-254367 | jenkins | v1.32.0 | 29 Feb 24 02:12 UTC | 29 Feb 24 02:18 UTC |
	|         | default-k8s-diff-port-254367                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-665766            | embed-certs-665766           | jenkins | v1.32.0 | 29 Feb 24 02:13 UTC | 29 Feb 24 02:13 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-665766                                  | embed-certs-665766           | jenkins | v1.32.0 | 29 Feb 24 02:13 UTC | 29 Feb 24 02:14 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-254968                              | old-k8s-version-254968       | jenkins | v1.32.0 | 29 Feb 24 02:14 UTC | 29 Feb 24 02:14 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-254968             | old-k8s-version-254968       | jenkins | v1.32.0 | 29 Feb 24 02:14 UTC | 29 Feb 24 02:14 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-254968                              | old-k8s-version-254968       | jenkins | v1.32.0 | 29 Feb 24 02:14 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-665766                 | embed-certs-665766           | jenkins | v1.32.0 | 29 Feb 24 02:15 UTC | 29 Feb 24 02:15 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-665766                                  | embed-certs-665766           | jenkins | v1.32.0 | 29 Feb 24 02:15 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| image   | no-preload-907398 image list                           | no-preload-907398            | jenkins | v1.32.0 | 29 Feb 24 02:18 UTC | 29 Feb 24 02:18 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p no-preload-907398                                   | no-preload-907398            | jenkins | v1.32.0 | 29 Feb 24 02:18 UTC | 29 Feb 24 02:18 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p no-preload-907398                                   | no-preload-907398            | jenkins | v1.32.0 | 29 Feb 24 02:18 UTC | 29 Feb 24 02:18 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p no-preload-907398                                   | no-preload-907398            | jenkins | v1.32.0 | 29 Feb 24 02:18 UTC | 29 Feb 24 02:18 UTC |
	| delete  | -p no-preload-907398                                   | no-preload-907398            | jenkins | v1.32.0 | 29 Feb 24 02:18 UTC | 29 Feb 24 02:18 UTC |
	| image   | default-k8s-diff-port-254367                           | default-k8s-diff-port-254367 | jenkins | v1.32.0 | 29 Feb 24 02:18 UTC | 29 Feb 24 02:18 UTC |
	|         | image list --format=json                               |                              |         |         |                     |                     |
	| pause   | -p                                                     | default-k8s-diff-port-254367 | jenkins | v1.32.0 | 29 Feb 24 02:18 UTC | 29 Feb 24 02:18 UTC |
	|         | default-k8s-diff-port-254367                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p                                                     | default-k8s-diff-port-254367 | jenkins | v1.32.0 | 29 Feb 24 02:18 UTC | 29 Feb 24 02:18 UTC |
	|         | default-k8s-diff-port-254367                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-254367 | jenkins | v1.32.0 | 29 Feb 24 02:18 UTC | 29 Feb 24 02:18 UTC |
	|         | default-k8s-diff-port-254367                           |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-254367 | jenkins | v1.32.0 | 29 Feb 24 02:18 UTC | 29 Feb 24 02:18 UTC |
	|         | default-k8s-diff-port-254367                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/29 02:15:00
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.22.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0229 02:15:00.195513  361093 out.go:291] Setting OutFile to fd 1 ...
	I0229 02:15:00.195780  361093 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 02:15:00.195791  361093 out.go:304] Setting ErrFile to fd 2...
	I0229 02:15:00.195798  361093 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 02:15:00.196014  361093 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18063-309085/.minikube/bin
	I0229 02:15:00.196538  361093 out.go:298] Setting JSON to false
	I0229 02:15:00.197510  361093 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":7044,"bootTime":1709165856,"procs":215,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1052-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0229 02:15:00.197578  361093 start.go:139] virtualization: kvm guest
	I0229 02:15:00.199670  361093 out.go:177] * [embed-certs-665766] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0229 02:15:00.201014  361093 out.go:177]   - MINIKUBE_LOCATION=18063
	I0229 02:15:00.202314  361093 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0229 02:15:00.201016  361093 notify.go:220] Checking for updates...
	I0229 02:15:00.204683  361093 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18063-309085/kubeconfig
	I0229 02:15:00.205981  361093 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18063-309085/.minikube
	I0229 02:15:00.207104  361093 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0229 02:15:00.208151  361093 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0229 02:15:00.209800  361093 config.go:182] Loaded profile config "embed-certs-665766": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0229 02:15:00.210427  361093 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0229 02:15:00.210478  361093 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:15:00.226129  361093 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35133
	I0229 02:15:00.226543  361093 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:15:00.227211  361093 main.go:141] libmachine: Using API Version  1
	I0229 02:15:00.227260  361093 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:15:00.227606  361093 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:15:00.227858  361093 main.go:141] libmachine: (embed-certs-665766) Calling .DriverName
	I0229 02:15:00.228153  361093 driver.go:392] Setting default libvirt URI to qemu:///system
	I0229 02:15:00.228600  361093 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0229 02:15:00.228648  361093 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:15:00.244111  361093 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46163
	I0229 02:15:00.244523  361093 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:15:00.244927  361093 main.go:141] libmachine: Using API Version  1
	I0229 02:15:00.244955  361093 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:15:00.245291  361093 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:15:00.245488  361093 main.go:141] libmachine: (embed-certs-665766) Calling .DriverName
	I0229 02:15:00.279319  361093 out.go:177] * Using the kvm2 driver based on existing profile
	I0229 02:15:00.280565  361093 start.go:299] selected driver: kvm2
	I0229 02:15:00.280576  361093 start.go:903] validating driver "kvm2" against &{Name:embed-certs-665766 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-665766 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.252 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 02:15:00.280689  361093 start.go:914] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0229 02:15:00.281579  361093 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 02:15:00.281718  361093 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18063-309085/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0229 02:15:00.296404  361093 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0229 02:15:00.296764  361093 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0229 02:15:00.296834  361093 cni.go:84] Creating CNI manager for ""
	I0229 02:15:00.296847  361093 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0229 02:15:00.296856  361093 start_flags.go:323] config:
	{Name:embed-certs-665766 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-665766 Namespace:default A
PIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.252 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0
CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 02:15:00.296993  361093 iso.go:125] acquiring lock: {Name:mk1a6013324bd96d611c2de882ca0af6f4df38f8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 02:15:00.298652  361093 out.go:177] * Starting control plane node embed-certs-665766 in cluster embed-certs-665766
	I0229 02:15:00.299785  361093 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I0229 02:15:00.299837  361093 preload.go:148] Found local preload: /home/jenkins/minikube-integration/18063-309085/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-amd64.tar.lz4
	I0229 02:15:00.299848  361093 cache.go:56] Caching tarball of preloaded images
	I0229 02:15:00.299924  361093 preload.go:174] Found /home/jenkins/minikube-integration/18063-309085/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0229 02:15:00.299936  361093 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on containerd
	I0229 02:15:00.300040  361093 profile.go:148] Saving config to /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/embed-certs-665766/config.json ...
	I0229 02:15:00.300211  361093 start.go:365] acquiring machines lock for embed-certs-665766: {Name:mk8de78527e9cb979575b614e5d893b33768243a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0229 02:15:00.300253  361093 start.go:369] acquired machines lock for "embed-certs-665766" in 22.524µs
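The "acquiring machines lock" pair above completes in ~22.5µs because no other process holds the profile lock; the spec printed with it ({Name:... Delay:500ms Timeout:13m0s Cancel:<nil>}) is the classic poll-with-timeout file-lock shape. A minimal self-contained Go sketch of that pattern (the lockfile path and helper name are hypothetical, not minikube's actual implementation):

package main

import (
	"fmt"
	"os"
	"time"
)

// acquire polls for an exclusive lockfile, sleeping `delay` between attempts
// and giving up after `timeout` — the Name/Delay/Timeout shape in the log.
func acquire(path string, delay, timeout time.Duration) (release func(), err error) {
	deadline := time.Now().Add(timeout)
	for {
		// O_CREATE|O_EXCL fails if the lockfile already exists, which is
		// what makes the lock exclusive across processes.
		f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
		if err == nil {
			f.Close()
			return func() { os.Remove(path) }, nil
		}
		if time.Now().After(deadline) {
			return nil, fmt.Errorf("timed out acquiring %s", path)
		}
		time.Sleep(delay)
	}
}

func main() {
	start := time.Now()
	release, err := acquire("/tmp/machines-embed-certs-665766.lock", 500*time.Millisecond, 13*time.Minute)
	if err != nil {
		fmt.Println(err)
		return
	}
	defer release()
	fmt.Printf("acquired machines lock in %s\n", time.Since(start))
}

When the lock is free the first OpenFile succeeds immediately, which is why the log reports microseconds rather than a multiple of the 500ms delay.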
	I0229 02:15:00.300268  361093 start.go:96] Skipping create...Using existing machine configuration
	I0229 02:15:00.300281  361093 fix.go:54] fixHost starting: 
	I0229 02:15:00.300618  361093 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0229 02:15:00.300658  361093 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:15:00.315579  361093 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37361
	I0229 02:15:00.315993  361093 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:15:00.316460  361093 main.go:141] libmachine: Using API Version  1
	I0229 02:15:00.316481  361093 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:15:00.316776  361093 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:15:00.317012  361093 main.go:141] libmachine: (embed-certs-665766) Calling .DriverName
	I0229 02:15:00.317164  361093 main.go:141] libmachine: (embed-certs-665766) Calling .GetState
	I0229 02:15:00.318770  361093 fix.go:102] recreateIfNeeded on embed-certs-665766: state=Stopped err=<nil>
	I0229 02:15:00.318802  361093 main.go:141] libmachine: (embed-certs-665766) Calling .DriverName
	W0229 02:15:00.318984  361093 fix.go:128] unexpected machine state, will restart: <nil>
	I0229 02:15:00.320597  361093 out.go:177] * Restarting existing kvm2 VM for "embed-certs-665766" ...
	I0229 02:14:57.672798  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:14:58.172654  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:14:58.673282  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:14:59.173312  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:14:59.672878  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:00.172953  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:00.673170  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:01.173005  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:01.672595  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:02.172649  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
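The interleaved lines from pid 360776 are a second test profile polling for its apiserver over SSH: the same `sudo pgrep -xnf kube-apiserver.*minikube.*` roughly every 500ms. The loop is poll-until-found-or-deadline; a local sketch (running pgrep directly rather than through ssh_runner, and with an assumed 2-minute deadline):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Poll every 500ms, as the timestamps in the log suggest, until
	// pgrep finds the process or the deadline passes.
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		// pgrep exits 0 when at least one process matches; -f matches the
		// full command line, -x requires a whole-line match, -n picks the
		// newest matching process.
		if err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
			fmt.Println("kube-apiserver is running")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for kube-apiserver")
}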
	I0229 02:14:58.736314  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:15:00.738234  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:15:02.738646  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:14:59.777395  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:15:01.781443  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:15:00.321860  361093 main.go:141] libmachine: (embed-certs-665766) Calling .Start
	I0229 02:15:00.322009  361093 main.go:141] libmachine: (embed-certs-665766) Ensuring networks are active...
	I0229 02:15:00.322780  361093 main.go:141] libmachine: (embed-certs-665766) Ensuring network default is active
	I0229 02:15:00.323102  361093 main.go:141] libmachine: (embed-certs-665766) Ensuring network mk-embed-certs-665766 is active
	I0229 02:15:00.323540  361093 main.go:141] libmachine: (embed-certs-665766) Getting domain xml...
	I0229 02:15:00.324206  361093 main.go:141] libmachine: (embed-certs-665766) Creating domain...
	I0229 02:15:01.564400  361093 main.go:141] libmachine: (embed-certs-665766) Waiting to get IP...
	I0229 02:15:01.565163  361093 main.go:141] libmachine: (embed-certs-665766) DBG | domain embed-certs-665766 has defined MAC address 52:54:00:0f:ed:e3 in network mk-embed-certs-665766
	I0229 02:15:01.565606  361093 main.go:141] libmachine: (embed-certs-665766) DBG | unable to find current IP address of domain embed-certs-665766 in network mk-embed-certs-665766
	I0229 02:15:01.565665  361093 main.go:141] libmachine: (embed-certs-665766) DBG | I0229 02:15:01.565569  361128 retry.go:31] will retry after 283.275743ms: waiting for machine to come up
	I0229 02:15:01.850148  361093 main.go:141] libmachine: (embed-certs-665766) DBG | domain embed-certs-665766 has defined MAC address 52:54:00:0f:ed:e3 in network mk-embed-certs-665766
	I0229 02:15:01.850742  361093 main.go:141] libmachine: (embed-certs-665766) DBG | unable to find current IP address of domain embed-certs-665766 in network mk-embed-certs-665766
	I0229 02:15:01.850796  361093 main.go:141] libmachine: (embed-certs-665766) DBG | I0229 02:15:01.850687  361128 retry.go:31] will retry after 252.966549ms: waiting for machine to come up
	I0229 02:15:02.105129  361093 main.go:141] libmachine: (embed-certs-665766) DBG | domain embed-certs-665766 has defined MAC address 52:54:00:0f:ed:e3 in network mk-embed-certs-665766
	I0229 02:15:02.105699  361093 main.go:141] libmachine: (embed-certs-665766) DBG | unable to find current IP address of domain embed-certs-665766 in network mk-embed-certs-665766
	I0229 02:15:02.105732  361093 main.go:141] libmachine: (embed-certs-665766) DBG | I0229 02:15:02.105660  361128 retry.go:31] will retry after 470.28664ms: waiting for machine to come up
	I0229 02:15:02.577216  361093 main.go:141] libmachine: (embed-certs-665766) DBG | domain embed-certs-665766 has defined MAC address 52:54:00:0f:ed:e3 in network mk-embed-certs-665766
	I0229 02:15:02.577778  361093 main.go:141] libmachine: (embed-certs-665766) DBG | unable to find current IP address of domain embed-certs-665766 in network mk-embed-certs-665766
	I0229 02:15:02.577807  361093 main.go:141] libmachine: (embed-certs-665766) DBG | I0229 02:15:02.577721  361128 retry.go:31] will retry after 527.191742ms: waiting for machine to come up
	I0229 02:15:03.106209  361093 main.go:141] libmachine: (embed-certs-665766) DBG | domain embed-certs-665766 has defined MAC address 52:54:00:0f:ed:e3 in network mk-embed-certs-665766
	I0229 02:15:03.106698  361093 main.go:141] libmachine: (embed-certs-665766) DBG | unable to find current IP address of domain embed-certs-665766 in network mk-embed-certs-665766
	I0229 02:15:03.106725  361093 main.go:141] libmachine: (embed-certs-665766) DBG | I0229 02:15:03.106650  361128 retry.go:31] will retry after 472.107889ms: waiting for machine to come up
	I0229 02:15:03.580375  361093 main.go:141] libmachine: (embed-certs-665766) DBG | domain embed-certs-665766 has defined MAC address 52:54:00:0f:ed:e3 in network mk-embed-certs-665766
	I0229 02:15:03.580945  361093 main.go:141] libmachine: (embed-certs-665766) DBG | unable to find current IP address of domain embed-certs-665766 in network mk-embed-certs-665766
	I0229 02:15:03.580972  361093 main.go:141] libmachine: (embed-certs-665766) DBG | I0229 02:15:03.580890  361128 retry.go:31] will retry after 683.066759ms: waiting for machine to come up
	I0229 02:15:04.265769  361093 main.go:141] libmachine: (embed-certs-665766) DBG | domain embed-certs-665766 has defined MAC address 52:54:00:0f:ed:e3 in network mk-embed-certs-665766
	I0229 02:15:04.266340  361093 main.go:141] libmachine: (embed-certs-665766) DBG | unable to find current IP address of domain embed-certs-665766 in network mk-embed-certs-665766
	I0229 02:15:04.266370  361093 main.go:141] libmachine: (embed-certs-665766) DBG | I0229 02:15:04.266282  361128 retry.go:31] will retry after 1.031418978s: waiting for machine to come up
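Note how the "will retry after" intervals above grow from ~280ms toward seconds: retry.go waits for the VM's DHCP lease with jittered, roughly exponential backoff. A sketch of the pattern (the 1.5x growth factor and ±50% jitter here are assumptions, not minikube's exact constants):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff keeps calling fn until it succeeds or attempts run out,
// sleeping a jittered, growing interval between tries — the shape of the
// "will retry after ..." lines in the log.
func retryWithBackoff(fn func() error, attempts int, base time.Duration) error {
	wait := base
	for i := 0; i < attempts; i++ {
		if err := fn(); err == nil {
			return nil
		}
		// Jitter by ±50% so concurrent waiters don't poll in lockstep.
		jittered := time.Duration(float64(wait) * (0.5 + rand.Float64()))
		fmt.Printf("will retry after %s: waiting for machine to come up\n", jittered)
		time.Sleep(jittered)
		wait = wait * 3 / 2 // grow ~1.5x per attempt (assumed factor)
	}
	return errors.New("machine never came up")
}

func main() {
	tries := 0
	_ = retryWithBackoff(func() error {
		tries++
		if tries < 5 {
			return errors.New("no IP yet")
		}
		return nil
	}, 10, 300*time.Millisecond)
	fmt.Println("got IP after", tries, "tries")
}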
	I0229 02:15:02.673169  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:03.173251  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:03.672864  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:04.173580  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:04.672736  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:05.173278  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:05.672747  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:06.173514  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:06.672853  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:07.173295  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:05.238704  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:15:07.736326  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:15:04.278766  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:15:06.779170  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:15:05.299213  361093 main.go:141] libmachine: (embed-certs-665766) DBG | domain embed-certs-665766 has defined MAC address 52:54:00:0f:ed:e3 in network mk-embed-certs-665766
	I0229 02:15:05.299740  361093 main.go:141] libmachine: (embed-certs-665766) DBG | unable to find current IP address of domain embed-certs-665766 in network mk-embed-certs-665766
	I0229 02:15:05.299773  361093 main.go:141] libmachine: (embed-certs-665766) DBG | I0229 02:15:05.299673  361128 retry.go:31] will retry after 1.037425014s: waiting for machine to come up
	I0229 02:15:06.339189  361093 main.go:141] libmachine: (embed-certs-665766) DBG | domain embed-certs-665766 has defined MAC address 52:54:00:0f:ed:e3 in network mk-embed-certs-665766
	I0229 02:15:06.339656  361093 main.go:141] libmachine: (embed-certs-665766) DBG | unable to find current IP address of domain embed-certs-665766 in network mk-embed-certs-665766
	I0229 02:15:06.339688  361093 main.go:141] libmachine: (embed-certs-665766) DBG | I0229 02:15:06.339607  361128 retry.go:31] will retry after 1.829261156s: waiting for machine to come up
	I0229 02:15:08.171250  361093 main.go:141] libmachine: (embed-certs-665766) DBG | domain embed-certs-665766 has defined MAC address 52:54:00:0f:ed:e3 in network mk-embed-certs-665766
	I0229 02:15:08.171913  361093 main.go:141] libmachine: (embed-certs-665766) DBG | unable to find current IP address of domain embed-certs-665766 in network mk-embed-certs-665766
	I0229 02:15:08.171940  361093 main.go:141] libmachine: (embed-certs-665766) DBG | I0229 02:15:08.171868  361128 retry.go:31] will retry after 1.840049442s: waiting for machine to come up
	I0229 02:15:10.015035  361093 main.go:141] libmachine: (embed-certs-665766) DBG | domain embed-certs-665766 has defined MAC address 52:54:00:0f:ed:e3 in network mk-embed-certs-665766
	I0229 02:15:10.015601  361093 main.go:141] libmachine: (embed-certs-665766) DBG | unable to find current IP address of domain embed-certs-665766 in network mk-embed-certs-665766
	I0229 02:15:10.015624  361093 main.go:141] libmachine: (embed-certs-665766) DBG | I0229 02:15:10.015545  361128 retry.go:31] will retry after 2.792261425s: waiting for machine to come up
	I0229 02:15:07.673496  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:08.173235  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:08.672970  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:09.173203  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:09.672669  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:10.172971  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:10.673523  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:11.172857  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:11.672596  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:12.173541  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:10.236392  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:15:12.241873  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:15:09.277845  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:15:11.280119  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:15:13.777454  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:15:12.811472  361093 main.go:141] libmachine: (embed-certs-665766) DBG | domain embed-certs-665766 has defined MAC address 52:54:00:0f:ed:e3 in network mk-embed-certs-665766
	I0229 02:15:12.812070  361093 main.go:141] libmachine: (embed-certs-665766) DBG | unable to find current IP address of domain embed-certs-665766 in network mk-embed-certs-665766
	I0229 02:15:12.812092  361093 main.go:141] libmachine: (embed-certs-665766) DBG | I0229 02:15:12.812028  361128 retry.go:31] will retry after 3.422816729s: waiting for machine to come up
	I0229 02:15:12.673205  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:13.173523  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:13.672774  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:14.173115  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:14.673616  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:15.172831  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:15.673160  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:16.172966  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:16.673287  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:17.172640  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:14.243740  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:15:16.736133  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:15:15.778484  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:15:17.778658  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:15:16.236374  361093 main.go:141] libmachine: (embed-certs-665766) DBG | domain embed-certs-665766 has defined MAC address 52:54:00:0f:ed:e3 in network mk-embed-certs-665766
	I0229 02:15:16.236943  361093 main.go:141] libmachine: (embed-certs-665766) DBG | unable to find current IP address of domain embed-certs-665766 in network mk-embed-certs-665766
	I0229 02:15:16.236973  361093 main.go:141] libmachine: (embed-certs-665766) DBG | I0229 02:15:16.236905  361128 retry.go:31] will retry after 3.865566322s: waiting for machine to come up
	I0229 02:15:20.106964  361093 main.go:141] libmachine: (embed-certs-665766) DBG | domain embed-certs-665766 has defined MAC address 52:54:00:0f:ed:e3 in network mk-embed-certs-665766
	I0229 02:15:20.107455  361093 main.go:141] libmachine: (embed-certs-665766) Found IP for machine: 192.168.39.252
	I0229 02:15:20.107480  361093 main.go:141] libmachine: (embed-certs-665766) Reserving static IP address...
	I0229 02:15:20.107494  361093 main.go:141] libmachine: (embed-certs-665766) DBG | domain embed-certs-665766 has current primary IP address 192.168.39.252 and MAC address 52:54:00:0f:ed:e3 in network mk-embed-certs-665766
	I0229 02:15:20.107964  361093 main.go:141] libmachine: (embed-certs-665766) Reserved static IP address: 192.168.39.252
	I0229 02:15:20.107994  361093 main.go:141] libmachine: (embed-certs-665766) Waiting for SSH to be available...
	I0229 02:15:20.108041  361093 main.go:141] libmachine: (embed-certs-665766) DBG | found host DHCP lease matching {name: "embed-certs-665766", mac: "52:54:00:0f:ed:e3", ip: "192.168.39.252"} in network mk-embed-certs-665766: {Iface:virbr4 ExpiryTime:2024-02-29 03:15:12 +0000 UTC Type:0 Mac:52:54:00:0f:ed:e3 Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:embed-certs-665766 Clientid:01:52:54:00:0f:ed:e3}
	I0229 02:15:20.108074  361093 main.go:141] libmachine: (embed-certs-665766) DBG | skip adding static IP to network mk-embed-certs-665766 - found existing host DHCP lease matching {name: "embed-certs-665766", mac: "52:54:00:0f:ed:e3", ip: "192.168.39.252"}
	I0229 02:15:20.108095  361093 main.go:141] libmachine: (embed-certs-665766) DBG | Getting to WaitForSSH function...
	I0229 02:15:20.110175  361093 main.go:141] libmachine: (embed-certs-665766) DBG | domain embed-certs-665766 has defined MAC address 52:54:00:0f:ed:e3 in network mk-embed-certs-665766
	I0229 02:15:20.110485  361093 main.go:141] libmachine: (embed-certs-665766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:ed:e3", ip: ""} in network mk-embed-certs-665766: {Iface:virbr4 ExpiryTime:2024-02-29 03:15:12 +0000 UTC Type:0 Mac:52:54:00:0f:ed:e3 Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:embed-certs-665766 Clientid:01:52:54:00:0f:ed:e3}
	I0229 02:15:20.110511  361093 main.go:141] libmachine: (embed-certs-665766) DBG | domain embed-certs-665766 has defined IP address 192.168.39.252 and MAC address 52:54:00:0f:ed:e3 in network mk-embed-certs-665766
	I0229 02:15:20.110667  361093 main.go:141] libmachine: (embed-certs-665766) DBG | Using SSH client type: external
	I0229 02:15:20.110696  361093 main.go:141] libmachine: (embed-certs-665766) DBG | Using SSH private key: /home/jenkins/minikube-integration/18063-309085/.minikube/machines/embed-certs-665766/id_rsa (-rw-------)
	I0229 02:15:20.110761  361093 main.go:141] libmachine: (embed-certs-665766) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.252 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18063-309085/.minikube/machines/embed-certs-665766/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0229 02:15:20.110788  361093 main.go:141] libmachine: (embed-certs-665766) DBG | About to run SSH command:
	I0229 02:15:20.110807  361093 main.go:141] libmachine: (embed-certs-665766) DBG | exit 0
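The DBG lines above show the driver using an external SSH client: it assembles the /usr/bin/ssh argument vector itself (host-key checking off, no password prompts, only the machine's id_rsa identity) and runs `exit 0` to probe reachability. A stripped-down sketch of the same assembly (host, user, and key path are taken from the log; the helper name is hypothetical):

package main

import (
	"fmt"
	"os/exec"
)

// probeSSH runs `exit 0` on the target through the system ssh binary, with
// the same kinds of options the log shows: no known_hosts pollution, no
// password prompts, only the named identity file.
func probeSSH(user, host, keyPath string) error {
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "PasswordAuthentication=no",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"-p", "22",
		fmt.Sprintf("%s@%s", user, host),
		"exit 0", // succeeds iff sshd is up and the key is accepted
	}
	return exec.Command("/usr/bin/ssh", args...).Run()
}

func main() {
	err := probeSSH("docker", "192.168.39.252",
		"/home/jenkins/minikube-integration/18063-309085/.minikube/machines/embed-certs-665766/id_rsa")
	fmt.Println("ssh reachable:", err == nil)
}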
	I0229 02:15:17.672587  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:18.173318  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:18.673512  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:19.172966  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:19.673611  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:20.172605  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:20.672736  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:21.173587  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:21.673298  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:22.172625  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:19.238381  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:15:21.736665  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:15:20.246600  361093 main.go:141] libmachine: (embed-certs-665766) DBG | SSH cmd err, output: <nil>: 
	I0229 02:15:20.247008  361093 main.go:141] libmachine: (embed-certs-665766) Calling .GetConfigRaw
	I0229 02:15:20.247628  361093 main.go:141] libmachine: (embed-certs-665766) Calling .GetIP
	I0229 02:15:20.250151  361093 main.go:141] libmachine: (embed-certs-665766) DBG | domain embed-certs-665766 has defined MAC address 52:54:00:0f:ed:e3 in network mk-embed-certs-665766
	I0229 02:15:20.250492  361093 main.go:141] libmachine: (embed-certs-665766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:ed:e3", ip: ""} in network mk-embed-certs-665766: {Iface:virbr4 ExpiryTime:2024-02-29 03:15:12 +0000 UTC Type:0 Mac:52:54:00:0f:ed:e3 Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:embed-certs-665766 Clientid:01:52:54:00:0f:ed:e3}
	I0229 02:15:20.250524  361093 main.go:141] libmachine: (embed-certs-665766) DBG | domain embed-certs-665766 has defined IP address 192.168.39.252 and MAC address 52:54:00:0f:ed:e3 in network mk-embed-certs-665766
	I0229 02:15:20.250769  361093 profile.go:148] Saving config to /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/embed-certs-665766/config.json ...
	I0229 02:15:20.251020  361093 machine.go:88] provisioning docker machine ...
	I0229 02:15:20.251044  361093 main.go:141] libmachine: (embed-certs-665766) Calling .DriverName
	I0229 02:15:20.251255  361093 main.go:141] libmachine: (embed-certs-665766) Calling .GetMachineName
	I0229 02:15:20.251442  361093 buildroot.go:166] provisioning hostname "embed-certs-665766"
	I0229 02:15:20.251465  361093 main.go:141] libmachine: (embed-certs-665766) Calling .GetMachineName
	I0229 02:15:20.251607  361093 main.go:141] libmachine: (embed-certs-665766) Calling .GetSSHHostname
	I0229 02:15:20.253793  361093 main.go:141] libmachine: (embed-certs-665766) DBG | domain embed-certs-665766 has defined MAC address 52:54:00:0f:ed:e3 in network mk-embed-certs-665766
	I0229 02:15:20.254144  361093 main.go:141] libmachine: (embed-certs-665766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:ed:e3", ip: ""} in network mk-embed-certs-665766: {Iface:virbr4 ExpiryTime:2024-02-29 03:15:12 +0000 UTC Type:0 Mac:52:54:00:0f:ed:e3 Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:embed-certs-665766 Clientid:01:52:54:00:0f:ed:e3}
	I0229 02:15:20.254176  361093 main.go:141] libmachine: (embed-certs-665766) DBG | domain embed-certs-665766 has defined IP address 192.168.39.252 and MAC address 52:54:00:0f:ed:e3 in network mk-embed-certs-665766
	I0229 02:15:20.254345  361093 main.go:141] libmachine: (embed-certs-665766) Calling .GetSSHPort
	I0229 02:15:20.254528  361093 main.go:141] libmachine: (embed-certs-665766) Calling .GetSSHKeyPath
	I0229 02:15:20.254701  361093 main.go:141] libmachine: (embed-certs-665766) Calling .GetSSHKeyPath
	I0229 02:15:20.254886  361093 main.go:141] libmachine: (embed-certs-665766) Calling .GetSSHUsername
	I0229 02:15:20.255075  361093 main.go:141] libmachine: Using SSH client type: native
	I0229 02:15:20.255290  361093 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.252 22 <nil> <nil>}
	I0229 02:15:20.255302  361093 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-665766 && echo "embed-certs-665766" | sudo tee /etc/hostname
	I0229 02:15:20.387006  361093 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-665766
	
	I0229 02:15:20.387037  361093 main.go:141] libmachine: (embed-certs-665766) Calling .GetSSHHostname
	I0229 02:15:20.389660  361093 main.go:141] libmachine: (embed-certs-665766) DBG | domain embed-certs-665766 has defined MAC address 52:54:00:0f:ed:e3 in network mk-embed-certs-665766
	I0229 02:15:20.390034  361093 main.go:141] libmachine: (embed-certs-665766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:ed:e3", ip: ""} in network mk-embed-certs-665766: {Iface:virbr4 ExpiryTime:2024-02-29 03:15:12 +0000 UTC Type:0 Mac:52:54:00:0f:ed:e3 Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:embed-certs-665766 Clientid:01:52:54:00:0f:ed:e3}
	I0229 02:15:20.390075  361093 main.go:141] libmachine: (embed-certs-665766) DBG | domain embed-certs-665766 has defined IP address 192.168.39.252 and MAC address 52:54:00:0f:ed:e3 in network mk-embed-certs-665766
	I0229 02:15:20.390263  361093 main.go:141] libmachine: (embed-certs-665766) Calling .GetSSHPort
	I0229 02:15:20.390512  361093 main.go:141] libmachine: (embed-certs-665766) Calling .GetSSHKeyPath
	I0229 02:15:20.390720  361093 main.go:141] libmachine: (embed-certs-665766) Calling .GetSSHKeyPath
	I0229 02:15:20.390846  361093 main.go:141] libmachine: (embed-certs-665766) Calling .GetSSHUsername
	I0229 02:15:20.391013  361093 main.go:141] libmachine: Using SSH client type: native
	I0229 02:15:20.391195  361093 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.252 22 <nil> <nil>}
	I0229 02:15:20.391212  361093 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-665766' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-665766/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-665766' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0229 02:15:20.517065  361093 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0229 02:15:20.517117  361093 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18063-309085/.minikube CaCertPath:/home/jenkins/minikube-integration/18063-309085/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18063-309085/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18063-309085/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18063-309085/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18063-309085/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18063-309085/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18063-309085/.minikube}
	I0229 02:15:20.517171  361093 buildroot.go:174] setting up certificates
	I0229 02:15:20.517189  361093 provision.go:83] configureAuth start
	I0229 02:15:20.517207  361093 main.go:141] libmachine: (embed-certs-665766) Calling .GetMachineName
	I0229 02:15:20.517534  361093 main.go:141] libmachine: (embed-certs-665766) Calling .GetIP
	I0229 02:15:20.520639  361093 main.go:141] libmachine: (embed-certs-665766) DBG | domain embed-certs-665766 has defined MAC address 52:54:00:0f:ed:e3 in network mk-embed-certs-665766
	I0229 02:15:20.521028  361093 main.go:141] libmachine: (embed-certs-665766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:ed:e3", ip: ""} in network mk-embed-certs-665766: {Iface:virbr4 ExpiryTime:2024-02-29 03:15:12 +0000 UTC Type:0 Mac:52:54:00:0f:ed:e3 Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:embed-certs-665766 Clientid:01:52:54:00:0f:ed:e3}
	I0229 02:15:20.521062  361093 main.go:141] libmachine: (embed-certs-665766) DBG | domain embed-certs-665766 has defined IP address 192.168.39.252 and MAC address 52:54:00:0f:ed:e3 in network mk-embed-certs-665766
	I0229 02:15:20.521231  361093 main.go:141] libmachine: (embed-certs-665766) Calling .GetSSHHostname
	I0229 02:15:20.523702  361093 main.go:141] libmachine: (embed-certs-665766) DBG | domain embed-certs-665766 has defined MAC address 52:54:00:0f:ed:e3 in network mk-embed-certs-665766
	I0229 02:15:20.524078  361093 main.go:141] libmachine: (embed-certs-665766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:ed:e3", ip: ""} in network mk-embed-certs-665766: {Iface:virbr4 ExpiryTime:2024-02-29 03:15:12 +0000 UTC Type:0 Mac:52:54:00:0f:ed:e3 Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:embed-certs-665766 Clientid:01:52:54:00:0f:ed:e3}
	I0229 02:15:20.524128  361093 main.go:141] libmachine: (embed-certs-665766) DBG | domain embed-certs-665766 has defined IP address 192.168.39.252 and MAC address 52:54:00:0f:ed:e3 in network mk-embed-certs-665766
	I0229 02:15:20.524228  361093 provision.go:138] copyHostCerts
	I0229 02:15:20.524293  361093 exec_runner.go:144] found /home/jenkins/minikube-integration/18063-309085/.minikube/ca.pem, removing ...
	I0229 02:15:20.524319  361093 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18063-309085/.minikube/ca.pem
	I0229 02:15:20.524405  361093 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18063-309085/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18063-309085/.minikube/ca.pem (1082 bytes)
	I0229 02:15:20.524527  361093 exec_runner.go:144] found /home/jenkins/minikube-integration/18063-309085/.minikube/cert.pem, removing ...
	I0229 02:15:20.524537  361093 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18063-309085/.minikube/cert.pem
	I0229 02:15:20.524583  361093 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18063-309085/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18063-309085/.minikube/cert.pem (1123 bytes)
	I0229 02:15:20.524674  361093 exec_runner.go:144] found /home/jenkins/minikube-integration/18063-309085/.minikube/key.pem, removing ...
	I0229 02:15:20.524684  361093 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18063-309085/.minikube/key.pem
	I0229 02:15:20.524718  361093 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18063-309085/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18063-309085/.minikube/key.pem (1675 bytes)
	I0229 02:15:20.524803  361093 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18063-309085/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18063-309085/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18063-309085/.minikube/certs/ca-key.pem org=jenkins.embed-certs-665766 san=[192.168.39.252 192.168.39.252 localhost 127.0.0.1 minikube embed-certs-665766]
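provision.go:112 above regenerates the machine's server certificate with the listed SANs (the VM IP, localhost, 127.0.0.1, minikube, and the profile hostname). A compact stdlib sketch of issuing such a cert — self-signed here for brevity, whereas the real flow signs with ca.pem/ca-key.pem:

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Key for the server cert.
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	// SANs mirror the san=[...] list in the log: every IP and DNS name
	// the endpoint may be reached under.
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-665766"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"localhost", "minikube", "embed-certs-665766"},
		IPAddresses:  []net.IP{net.ParseIP("192.168.39.252"), net.ParseIP("127.0.0.1")},
	}
	// Self-signed for the sketch; minikube signs with its CA key instead.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}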
	I0229 02:15:20.822225  361093 provision.go:172] copyRemoteCerts
	I0229 02:15:20.822298  361093 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0229 02:15:20.822346  361093 main.go:141] libmachine: (embed-certs-665766) Calling .GetSSHHostname
	I0229 02:15:20.825396  361093 main.go:141] libmachine: (embed-certs-665766) DBG | domain embed-certs-665766 has defined MAC address 52:54:00:0f:ed:e3 in network mk-embed-certs-665766
	I0229 02:15:20.825833  361093 main.go:141] libmachine: (embed-certs-665766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:ed:e3", ip: ""} in network mk-embed-certs-665766: {Iface:virbr4 ExpiryTime:2024-02-29 03:15:12 +0000 UTC Type:0 Mac:52:54:00:0f:ed:e3 Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:embed-certs-665766 Clientid:01:52:54:00:0f:ed:e3}
	I0229 02:15:20.825863  361093 main.go:141] libmachine: (embed-certs-665766) DBG | domain embed-certs-665766 has defined IP address 192.168.39.252 and MAC address 52:54:00:0f:ed:e3 in network mk-embed-certs-665766
	I0229 02:15:20.826114  361093 main.go:141] libmachine: (embed-certs-665766) Calling .GetSSHPort
	I0229 02:15:20.826349  361093 main.go:141] libmachine: (embed-certs-665766) Calling .GetSSHKeyPath
	I0229 02:15:20.826496  361093 main.go:141] libmachine: (embed-certs-665766) Calling .GetSSHUsername
	I0229 02:15:20.826626  361093 sshutil.go:53] new ssh client: &{IP:192.168.39.252 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-309085/.minikube/machines/embed-certs-665766/id_rsa Username:docker}
	I0229 02:15:20.915099  361093 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-309085/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0229 02:15:20.942985  361093 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-309085/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0229 02:15:20.974642  361093 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-309085/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0229 02:15:21.002039  361093 provision.go:86] duration metric: configureAuth took 484.832048ms
	I0229 02:15:21.002101  361093 buildroot.go:189] setting minikube options for container-runtime
	I0229 02:15:21.002327  361093 config.go:182] Loaded profile config "embed-certs-665766": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0229 02:15:21.002341  361093 machine.go:91] provisioned docker machine in 751.30636ms
	I0229 02:15:21.002350  361093 start.go:300] post-start starting for "embed-certs-665766" (driver="kvm2")
	I0229 02:15:21.002361  361093 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0229 02:15:21.002433  361093 main.go:141] libmachine: (embed-certs-665766) Calling .DriverName
	I0229 02:15:21.002803  361093 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0229 02:15:21.002843  361093 main.go:141] libmachine: (embed-certs-665766) Calling .GetSSHHostname
	I0229 02:15:21.005633  361093 main.go:141] libmachine: (embed-certs-665766) DBG | domain embed-certs-665766 has defined MAC address 52:54:00:0f:ed:e3 in network mk-embed-certs-665766
	I0229 02:15:21.006105  361093 main.go:141] libmachine: (embed-certs-665766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:ed:e3", ip: ""} in network mk-embed-certs-665766: {Iface:virbr4 ExpiryTime:2024-02-29 03:15:12 +0000 UTC Type:0 Mac:52:54:00:0f:ed:e3 Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:embed-certs-665766 Clientid:01:52:54:00:0f:ed:e3}
	I0229 02:15:21.006141  361093 main.go:141] libmachine: (embed-certs-665766) DBG | domain embed-certs-665766 has defined IP address 192.168.39.252 and MAC address 52:54:00:0f:ed:e3 in network mk-embed-certs-665766
	I0229 02:15:21.006336  361093 main.go:141] libmachine: (embed-certs-665766) Calling .GetSSHPort
	I0229 02:15:21.006562  361093 main.go:141] libmachine: (embed-certs-665766) Calling .GetSSHKeyPath
	I0229 02:15:21.006784  361093 main.go:141] libmachine: (embed-certs-665766) Calling .GetSSHUsername
	I0229 02:15:21.006972  361093 sshutil.go:53] new ssh client: &{IP:192.168.39.252 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-309085/.minikube/machines/embed-certs-665766/id_rsa Username:docker}
	I0229 02:15:21.094951  361093 ssh_runner.go:195] Run: cat /etc/os-release
	I0229 02:15:21.100607  361093 info.go:137] Remote host: Buildroot 2023.02.9
	I0229 02:15:21.100637  361093 filesync.go:126] Scanning /home/jenkins/minikube-integration/18063-309085/.minikube/addons for local assets ...
	I0229 02:15:21.100736  361093 filesync.go:126] Scanning /home/jenkins/minikube-integration/18063-309085/.minikube/files for local assets ...
	I0229 02:15:21.100864  361093 filesync.go:149] local asset: /home/jenkins/minikube-integration/18063-309085/.minikube/files/etc/ssl/certs/3163362.pem -> 3163362.pem in /etc/ssl/certs
	I0229 02:15:21.101000  361093 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0229 02:15:21.113280  361093 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-309085/.minikube/files/etc/ssl/certs/3163362.pem --> /etc/ssl/certs/3163362.pem (1708 bytes)
	I0229 02:15:21.142831  361093 start.go:303] post-start completed in 140.464811ms
	I0229 02:15:21.142864  361093 fix.go:56] fixHost completed within 20.842581853s
	I0229 02:15:21.142977  361093 main.go:141] libmachine: (embed-certs-665766) Calling .GetSSHHostname
	I0229 02:15:21.145855  361093 main.go:141] libmachine: (embed-certs-665766) DBG | domain embed-certs-665766 has defined MAC address 52:54:00:0f:ed:e3 in network mk-embed-certs-665766
	I0229 02:15:21.146221  361093 main.go:141] libmachine: (embed-certs-665766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:ed:e3", ip: ""} in network mk-embed-certs-665766: {Iface:virbr4 ExpiryTime:2024-02-29 03:15:12 +0000 UTC Type:0 Mac:52:54:00:0f:ed:e3 Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:embed-certs-665766 Clientid:01:52:54:00:0f:ed:e3}
	I0229 02:15:21.146273  361093 main.go:141] libmachine: (embed-certs-665766) DBG | domain embed-certs-665766 has defined IP address 192.168.39.252 and MAC address 52:54:00:0f:ed:e3 in network mk-embed-certs-665766
	I0229 02:15:21.146427  361093 main.go:141] libmachine: (embed-certs-665766) Calling .GetSSHPort
	I0229 02:15:21.146675  361093 main.go:141] libmachine: (embed-certs-665766) Calling .GetSSHKeyPath
	I0229 02:15:21.146826  361093 main.go:141] libmachine: (embed-certs-665766) Calling .GetSSHKeyPath
	I0229 02:15:21.146946  361093 main.go:141] libmachine: (embed-certs-665766) Calling .GetSSHUsername
	I0229 02:15:21.147137  361093 main.go:141] libmachine: Using SSH client type: native
	I0229 02:15:21.147306  361093 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.252 22 <nil> <nil>}
	I0229 02:15:21.147316  361093 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0229 02:15:21.267552  361093 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709172921.247201349
	
	I0229 02:15:21.267579  361093 fix.go:206] guest clock: 1709172921.247201349
	I0229 02:15:21.267590  361093 fix.go:219] Guest: 2024-02-29 02:15:21.247201349 +0000 UTC Remote: 2024-02-29 02:15:21.142869918 +0000 UTC m=+21.001592109 (delta=104.331431ms)
	I0229 02:15:21.267644  361093 fix.go:190] guest clock delta is within tolerance: 104.331431ms
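fix.go above reads the guest clock over SSH, compares it with the host, and accepts the ~104ms delta as within tolerance. The guest side is just `date +%s.%N`; a local sketch of the parse-and-compare step (the one-second tolerance is an assumption — the log only says the observed delta was acceptable):

package main

import (
	"fmt"
	"os/exec"
	"strconv"
	"strings"
	"time"
)

func main() {
	// In minikube this command runs in the guest over SSH; run locally it
	// still demonstrates the parsing and the delta check.
	out, err := exec.Command("date", "+%s.%N").Output()
	if err != nil {
		panic(err)
	}
	secs, err := strconv.ParseFloat(strings.TrimSpace(string(out)), 64)
	if err != nil {
		panic(err)
	}
	guest := time.Unix(0, int64(secs*1e9))
	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = time.Second // assumed for the sketch
	fmt.Printf("guest clock delta %s (within tolerance: %v)\n", delta, delta <= tolerance)
}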
	I0229 02:15:21.267653  361093 start.go:83] releasing machines lock for "embed-certs-665766", held for 20.967392077s
	I0229 02:15:21.267681  361093 main.go:141] libmachine: (embed-certs-665766) Calling .DriverName
	I0229 02:15:21.267949  361093 main.go:141] libmachine: (embed-certs-665766) Calling .GetIP
	I0229 02:15:21.270730  361093 main.go:141] libmachine: (embed-certs-665766) DBG | domain embed-certs-665766 has defined MAC address 52:54:00:0f:ed:e3 in network mk-embed-certs-665766
	I0229 02:15:21.271194  361093 main.go:141] libmachine: (embed-certs-665766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:ed:e3", ip: ""} in network mk-embed-certs-665766: {Iface:virbr4 ExpiryTime:2024-02-29 03:15:12 +0000 UTC Type:0 Mac:52:54:00:0f:ed:e3 Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:embed-certs-665766 Clientid:01:52:54:00:0f:ed:e3}
	I0229 02:15:21.271223  361093 main.go:141] libmachine: (embed-certs-665766) DBG | domain embed-certs-665766 has defined IP address 192.168.39.252 and MAC address 52:54:00:0f:ed:e3 in network mk-embed-certs-665766
	I0229 02:15:21.271559  361093 main.go:141] libmachine: (embed-certs-665766) Calling .DriverName
	I0229 02:15:21.272366  361093 main.go:141] libmachine: (embed-certs-665766) Calling .DriverName
	I0229 02:15:21.272582  361093 main.go:141] libmachine: (embed-certs-665766) Calling .DriverName
	I0229 02:15:21.272673  361093 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0229 02:15:21.272718  361093 main.go:141] libmachine: (embed-certs-665766) Calling .GetSSHHostname
	I0229 02:15:21.272844  361093 ssh_runner.go:195] Run: cat /version.json
	I0229 02:15:21.272867  361093 main.go:141] libmachine: (embed-certs-665766) Calling .GetSSHHostname
	I0229 02:15:21.276061  361093 main.go:141] libmachine: (embed-certs-665766) DBG | domain embed-certs-665766 has defined MAC address 52:54:00:0f:ed:e3 in network mk-embed-certs-665766
	I0229 02:15:21.276385  361093 main.go:141] libmachine: (embed-certs-665766) DBG | domain embed-certs-665766 has defined MAC address 52:54:00:0f:ed:e3 in network mk-embed-certs-665766
	I0229 02:15:21.276515  361093 main.go:141] libmachine: (embed-certs-665766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:ed:e3", ip: ""} in network mk-embed-certs-665766: {Iface:virbr4 ExpiryTime:2024-02-29 03:15:12 +0000 UTC Type:0 Mac:52:54:00:0f:ed:e3 Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:embed-certs-665766 Clientid:01:52:54:00:0f:ed:e3}
	I0229 02:15:21.276563  361093 main.go:141] libmachine: (embed-certs-665766) DBG | domain embed-certs-665766 has defined IP address 192.168.39.252 and MAC address 52:54:00:0f:ed:e3 in network mk-embed-certs-665766
	I0229 02:15:21.276647  361093 main.go:141] libmachine: (embed-certs-665766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:ed:e3", ip: ""} in network mk-embed-certs-665766: {Iface:virbr4 ExpiryTime:2024-02-29 03:15:12 +0000 UTC Type:0 Mac:52:54:00:0f:ed:e3 Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:embed-certs-665766 Clientid:01:52:54:00:0f:ed:e3}
	I0229 02:15:21.276673  361093 main.go:141] libmachine: (embed-certs-665766) DBG | domain embed-certs-665766 has defined IP address 192.168.39.252 and MAC address 52:54:00:0f:ed:e3 in network mk-embed-certs-665766
	I0229 02:15:21.276693  361093 main.go:141] libmachine: (embed-certs-665766) Calling .GetSSHPort
	I0229 02:15:21.276843  361093 main.go:141] libmachine: (embed-certs-665766) Calling .GetSSHPort
	I0229 02:15:21.276926  361093 main.go:141] libmachine: (embed-certs-665766) Calling .GetSSHKeyPath
	I0229 02:15:21.277031  361093 main.go:141] libmachine: (embed-certs-665766) Calling .GetSSHKeyPath
	I0229 02:15:21.277103  361093 main.go:141] libmachine: (embed-certs-665766) Calling .GetSSHUsername
	I0229 02:15:21.277160  361093 main.go:141] libmachine: (embed-certs-665766) Calling .GetSSHUsername
	I0229 02:15:21.277254  361093 sshutil.go:53] new ssh client: &{IP:192.168.39.252 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-309085/.minikube/machines/embed-certs-665766/id_rsa Username:docker}
	I0229 02:15:21.277316  361093 sshutil.go:53] new ssh client: &{IP:192.168.39.252 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-309085/.minikube/machines/embed-certs-665766/id_rsa Username:docker}
	I0229 02:15:21.380428  361093 ssh_runner.go:195] Run: systemctl --version
	I0229 02:15:21.387150  361093 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0229 02:15:21.393537  361093 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0229 02:15:21.393595  361093 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0229 02:15:21.411579  361093 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0229 02:15:21.411609  361093 start.go:475] detecting cgroup driver to use...
	I0229 02:15:21.411682  361093 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0229 02:15:21.442122  361093 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0229 02:15:21.457974  361093 docker.go:217] disabling cri-docker service (if available) ...
	I0229 02:15:21.458041  361093 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0229 02:15:21.474421  361093 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0229 02:15:21.490462  361093 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0229 02:15:21.618342  361093 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0229 02:15:21.802579  361093 docker.go:233] disabling docker service ...
	I0229 02:15:21.802649  361093 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0229 02:15:21.818349  361093 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0229 02:15:21.832338  361093 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0229 02:15:21.975684  361093 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0229 02:15:22.118703  361093 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0229 02:15:22.134525  361093 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0229 02:15:22.155421  361093 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0229 02:15:22.166809  361093 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0229 02:15:22.180082  361093 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0229 02:15:22.180163  361093 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0229 02:15:22.195414  361093 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0229 02:15:22.206812  361093 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0229 02:15:22.217930  361093 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0229 02:15:22.229893  361093 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0229 02:15:22.244345  361093 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
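The run of `sed -i -r` commands above rewrites /etc/containerd/config.toml in place: pin the pause image to registry.k8s.io/pause:3.9, force SystemdCgroup = false to match the cgroupfs driver, migrate runtime types to io.containerd.runc.v2, and point conf_dir at /etc/cni/net.d. The same edit in Go, shown for the SystemdCgroup line only (writing without sudo is an assumption for the sketch):

package main

import (
	"fmt"
	"os"
	"regexp"
)

func main() {
	const path = "/etc/containerd/config.toml"
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	// Equivalent of: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
	// — keep the indentation capture group, replace only the value.
	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
	updated := re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
	if err := os.WriteFile(path, updated, 0o644); err != nil {
		panic(err)
	}
	fmt.Println("SystemdCgroup forced to false (cgroupfs driver)")
}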
	I0229 02:15:22.255766  361093 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0229 02:15:22.265968  361093 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0229 02:15:22.266042  361093 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0229 02:15:22.280500  361093 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
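Above, the sysctl probe fails with status 255 because br_netfilter is not loaded yet (so /proc/sys/net/bridge/ does not exist); the runner then loads the module and enables IP forwarding by writing into /proc directly, which is all `echo 1 > /proc/sys/net/ipv4/ip_forward` does. A sketch of that fallback sequence (requires root, as the sudo in the log implies):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// The sysctl key only exists once br_netfilter is loaded, which is
	// why the log's first probe cannot stat it.
	const key = "/proc/sys/net/bridge/bridge-nf-call-iptables"
	if _, err := os.Stat(key); err != nil {
		if err := exec.Command("modprobe", "br_netfilter").Run(); err != nil {
			panic(fmt.Errorf("modprobe br_netfilter: %w", err))
		}
	}
	// Equivalent of: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0o644); err != nil {
		panic(err)
	}
	fmt.Println("bridge netfilter loaded, ip_forward enabled")
}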
	I0229 02:15:22.290749  361093 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 02:15:22.447260  361093 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0229 02:15:22.489965  361093 start.go:522] Will wait 60s for socket path /run/containerd/containerd.sock
	I0229 02:15:22.490049  361093 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0229 02:15:22.495946  361093 retry.go:31] will retry after 681.640314ms: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
	I0229 02:15:23.178613  361093 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0229 02:15:23.186465  361093 start.go:543] Will wait 60s for crictl version
	I0229 02:15:23.186531  361093 ssh_runner.go:195] Run: which crictl
	I0229 02:15:23.191421  361093 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0229 02:15:23.240728  361093 start.go:559] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.7.11
	RuntimeApiVersion:  v1
	I0229 02:15:23.240833  361093 ssh_runner.go:195] Run: containerd --version
	I0229 02:15:23.271700  361093 ssh_runner.go:195] Run: containerd --version
	I0229 02:15:23.311413  361093 out.go:177] * Preparing Kubernetes v1.28.4 on containerd 1.7.11 ...
	I0229 02:15:20.278855  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:15:22.776938  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:15:23.312543  361093 main.go:141] libmachine: (embed-certs-665766) Calling .GetIP
	I0229 02:15:23.315197  361093 main.go:141] libmachine: (embed-certs-665766) DBG | domain embed-certs-665766 has defined MAC address 52:54:00:0f:ed:e3 in network mk-embed-certs-665766
	I0229 02:15:23.315505  361093 main.go:141] libmachine: (embed-certs-665766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:ed:e3", ip: ""} in network mk-embed-certs-665766: {Iface:virbr4 ExpiryTime:2024-02-29 03:15:12 +0000 UTC Type:0 Mac:52:54:00:0f:ed:e3 Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:embed-certs-665766 Clientid:01:52:54:00:0f:ed:e3}
	I0229 02:15:23.315541  361093 main.go:141] libmachine: (embed-certs-665766) DBG | domain embed-certs-665766 has defined IP address 192.168.39.252 and MAC address 52:54:00:0f:ed:e3 in network mk-embed-certs-665766
	I0229 02:15:23.315774  361093 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0229 02:15:23.321091  361093 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
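The bash one-liner above is how host.minikube.internal gets pinned: strip any existing line for the name, append the fresh mapping, write to a temp file, and copy it back over /etc/hosts in one step. The same filter-append-replace in Go (writing without sudo, and a fixed temp-file name, are assumptions for the sketch):

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const entry = "192.168.39.1\thost.minikube.internal"
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	// Drop any stale line for the name, mirroring grep -v $'\thost.minikube.internal$'.
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\thost.minikube.internal") {
			kept = append(kept, line)
		}
	}
	kept = append(kept, entry)
	out := strings.Join(kept, "\n") + "\n"
	// Stage next to the target so the final rename is atomic on the same
	// filesystem — the Go analogue of the /tmp/h.$$ + sudo cp dance.
	tmp := "/etc/hosts.new"
	if err := os.WriteFile(tmp, []byte(out), 0o644); err != nil {
		panic(err)
	}
	if err := os.Rename(tmp, "/etc/hosts"); err != nil {
		panic(err)
	}
	fmt.Println("pinned", entry)
}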
	I0229 02:15:23.335366  361093 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I0229 02:15:23.335482  361093 ssh_runner.go:195] Run: sudo crictl images --output json
	I0229 02:15:23.380351  361093 containerd.go:612] all images are preloaded for containerd runtime.
	I0229 02:15:23.380391  361093 containerd.go:519] Images already preloaded, skipping extraction
	I0229 02:15:23.380462  361093 ssh_runner.go:195] Run: sudo crictl images --output json
	I0229 02:15:23.421267  361093 containerd.go:612] all images are preloaded for containerd runtime.
	I0229 02:15:23.421295  361093 cache_images.go:84] Images are preloaded, skipping loading
	I0229 02:15:23.421374  361093 ssh_runner.go:195] Run: sudo crictl info
	I0229 02:15:23.460765  361093 cni.go:84] Creating CNI manager for ""
	I0229 02:15:23.460802  361093 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0229 02:15:23.460841  361093 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0229 02:15:23.460868  361093 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.252 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-665766 NodeName:embed-certs-665766 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.252"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.252 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt St
aticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0229 02:15:23.461060  361093 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.252
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "embed-certs-665766"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.252
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.252"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
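The kubeadm config above is four YAML documents in one file (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration), rendered from the kubeadm options struct and written to /var/tmp/minikube/kubeadm.yaml.new a few lines below. A quick way to sanity-check such a file before feeding it to the init phases (a sketch; --dry-run coverage varies across kubeadm releases):

    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run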
	
	I0229 02:15:23.461154  361093 kubeadm.go:976] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=embed-certs-665766 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.252
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:embed-certs-665766 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
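The kubelet unit text above is a systemd drop-in: the empty ExecStart= line clears the vendor-supplied command before the replacement is declared, which is the standard way to override ExecStart. Written out by hand it would look like this (paths match the scp targets below; the flags are abbreviated and the daemon-reload/restart steps are assumptions, not shown at this point in the log):

    sudo mkdir -p /etc/systemd/system/kubelet.service.d
    sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf >/dev/null <<'EOF'
    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --config=/var/lib/kubelet/config.yaml
    EOF
    sudo systemctl daemon-reload && sudo systemctl restart kubelet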
	I0229 02:15:23.461223  361093 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0229 02:15:23.472810  361093 binaries.go:44] Found k8s binaries, skipping transfer
	I0229 02:15:23.472873  361093 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0229 02:15:23.483214  361093 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (392 bytes)
	I0229 02:15:23.502301  361093 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0229 02:15:23.522993  361093 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2113 bytes)
	I0229 02:15:23.543866  361093 ssh_runner.go:195] Run: grep 192.168.39.252	control-plane.minikube.internal$ /etc/hosts
	I0229 02:15:23.548448  361093 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.252	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0229 02:15:23.561909  361093 certs.go:56] Setting up /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/embed-certs-665766 for IP: 192.168.39.252
	I0229 02:15:23.561962  361093 certs.go:190] acquiring lock for shared ca certs: {Name:mkd93205d1e0ff28501dacf7d21e224f19de9501 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:15:23.562164  361093 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18063-309085/.minikube/ca.key
	I0229 02:15:23.562207  361093 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18063-309085/.minikube/proxy-client-ca.key
	I0229 02:15:23.562316  361093 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/embed-certs-665766/client.key
	I0229 02:15:23.562390  361093 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/embed-certs-665766/apiserver.key.ba3365be
	I0229 02:15:23.562442  361093 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/embed-certs-665766/proxy-client.key
	I0229 02:15:23.562597  361093 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-309085/.minikube/certs/home/jenkins/minikube-integration/18063-309085/.minikube/certs/316336.pem (1338 bytes)
	W0229 02:15:23.562642  361093 certs.go:433] ignoring /home/jenkins/minikube-integration/18063-309085/.minikube/certs/home/jenkins/minikube-integration/18063-309085/.minikube/certs/316336_empty.pem, impossibly tiny 0 bytes
	I0229 02:15:23.562657  361093 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-309085/.minikube/certs/home/jenkins/minikube-integration/18063-309085/.minikube/certs/ca-key.pem (1679 bytes)
	I0229 02:15:23.562691  361093 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-309085/.minikube/certs/home/jenkins/minikube-integration/18063-309085/.minikube/certs/ca.pem (1082 bytes)
	I0229 02:15:23.562725  361093 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-309085/.minikube/certs/home/jenkins/minikube-integration/18063-309085/.minikube/certs/cert.pem (1123 bytes)
	I0229 02:15:23.562747  361093 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-309085/.minikube/certs/home/jenkins/minikube-integration/18063-309085/.minikube/certs/key.pem (1675 bytes)
	I0229 02:15:23.562787  361093 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-309085/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18063-309085/.minikube/files/etc/ssl/certs/3163362.pem (1708 bytes)
	I0229 02:15:23.563460  361093 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/embed-certs-665766/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0229 02:15:23.592672  361093 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/embed-certs-665766/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0229 02:15:23.620893  361093 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/embed-certs-665766/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0229 02:15:23.648810  361093 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/embed-certs-665766/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0229 02:15:23.677012  361093 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-309085/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0229 02:15:23.704430  361093 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-309085/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0229 02:15:23.736296  361093 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-309085/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0229 02:15:23.765295  361093 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-309085/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0229 02:15:23.796388  361093 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-309085/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0229 02:15:23.824848  361093 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-309085/.minikube/certs/316336.pem --> /usr/share/ca-certificates/316336.pem (1338 bytes)
	I0229 02:15:23.852786  361093 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-309085/.minikube/files/etc/ssl/certs/3163362.pem --> /usr/share/ca-certificates/3163362.pem (1708 bytes)
	I0229 02:15:23.882944  361093 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0229 02:15:23.907836  361093 ssh_runner.go:195] Run: openssl version
	I0229 02:15:23.916052  361093 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0229 02:15:23.930370  361093 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0229 02:15:23.937378  361093 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 29 01:12 /usr/share/ca-certificates/minikubeCA.pem
	I0229 02:15:23.937461  361093 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0229 02:15:23.944482  361093 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0229 02:15:23.956702  361093 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/316336.pem && ln -fs /usr/share/ca-certificates/316336.pem /etc/ssl/certs/316336.pem"
	I0229 02:15:23.968559  361093 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/316336.pem
	I0229 02:15:23.974129  361093 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 29 01:18 /usr/share/ca-certificates/316336.pem
	I0229 02:15:23.974207  361093 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/316336.pem
	I0229 02:15:23.980916  361093 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/316336.pem /etc/ssl/certs/51391683.0"
	I0229 02:15:23.993131  361093 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3163362.pem && ln -fs /usr/share/ca-certificates/3163362.pem /etc/ssl/certs/3163362.pem"
	I0229 02:15:24.005391  361093 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3163362.pem
	I0229 02:15:24.010645  361093 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 29 01:18 /usr/share/ca-certificates/3163362.pem
	I0229 02:15:24.010717  361093 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3163362.pem
	I0229 02:15:24.017160  361093 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3163362.pem /etc/ssl/certs/3ec20f2e.0"
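The test/ln/openssl sequence repeated above for each CA is a manual c_rehash: the certificate is linked into /etc/ssl/certs under its own name, its subject-name hash is computed, and a <hash>.0 symlink is added so OpenSSL can resolve the CA by hash at verify time. Condensed for one certificate (path and hash taken from the log):

    pem=/usr/share/ca-certificates/minikubeCA.pem
    sudo ln -fs "$pem" /etc/ssl/certs/minikubeCA.pem
    hash=$(openssl x509 -hash -noout -in "$pem")   # b5213941 for this CA
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"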
	I0229 02:15:24.029150  361093 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0229 02:15:24.033893  361093 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0229 02:15:24.040509  361093 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0229 02:15:24.047587  361093 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0229 02:15:24.054651  361093 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0229 02:15:24.061675  361093 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0229 02:15:24.068724  361093 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
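Each -checkend 86400 run above asks openssl whether the certificate will still be valid 24 hours from now; exit status 0 means yes, 1 means it would expire, and a non-zero status is what would trigger regeneration. Looped over the etcd certs (the glob is illustrative):

    for crt in /var/lib/minikube/certs/etcd/*.crt; do
      sudo openssl x509 -noout -in "$crt" -checkend 86400 \
        || echo "$crt expires within 24h"
    done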
	I0229 02:15:24.075815  361093 kubeadm.go:404] StartCluster: {Name:embed-certs-665766 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-665766 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.252 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 02:15:24.075975  361093 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0229 02:15:24.076030  361093 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0229 02:15:24.117750  361093 cri.go:89] found id: "b8995b533d655528c2a8eb0c53c0fbb42bbcbe14783dd91f6cd5138bba8e2549"
	I0229 02:15:24.117784  361093 cri.go:89] found id: "42aa7bcbeb0511f905d4e224009d29ed4c569f9cd550d41c85ba6f1ba223c630"
	I0229 02:15:24.117789  361093 cri.go:89] found id: "88e69487fd9180cc2d5f9ec0208eac8913eadd02ba14d3b34ced6bbfeb665662"
	I0229 02:15:24.117793  361093 cri.go:89] found id: "a41584589c6bb92714424efe15054130b4ceb896f3156efa2a7766d6b59d9348"
	I0229 02:15:24.117797  361093 cri.go:89] found id: "b25459b6c6752189380e056e8069c48978248a2010e3dcf020d4fb1d86ede5bb"
	I0229 02:15:24.117806  361093 cri.go:89] found id: "05238b2e56a497eb44499d65d50fbd05b37efc25e20c7faac8ad0a3850f903a4"
	I0229 02:15:24.117810  361093 cri.go:89] found id: "2211513b5227406a5b1d1ef326c875559d34343ee2fbbcaed625e049fb7ed1dd"
	I0229 02:15:24.117814  361093 cri.go:89] found id: "8e4fbd5378900f0e966b2c43bb36dc6420963546aef89d8c08eff3b48520a5b3"
	I0229 02:15:24.117820  361093 cri.go:89] found id: ""
	I0229 02:15:24.117872  361093 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0229 02:15:24.132769  361093 cri.go:116] JSON = null
	W0229 02:15:24.132821  361093 kubeadm.go:411] unpause failed: list paused: list returned 0 containers, but ps returned 8
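The warning above comes from cross-checking two views of the runtime: crictl's CRI listing returned 8 kube-system container IDs, while runc's state directory for the k8s.io namespace returned JSON null, i.e. nothing is tracked as paused, so there is nothing to unpause. The same comparison by hand (jq for counting is an assumption; `jq 'length'` on null yields 0):

    sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system | wc -l
    sudo runc --root /run/containerd/runc/k8s.io list -f json | jq 'length'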
	I0229 02:15:24.132878  361093 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0229 02:15:24.143554  361093 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0229 02:15:24.143571  361093 kubeadm.go:636] restartCluster start
	I0229 02:15:24.143614  361093 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0229 02:15:24.154226  361093 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:15:24.154952  361093 kubeconfig.go:135] verify returned: extract IP: "embed-certs-665766" does not appear in /home/jenkins/minikube-integration/18063-309085/kubeconfig
	I0229 02:15:24.155312  361093 kubeconfig.go:146] "embed-certs-665766" context is missing from /home/jenkins/minikube-integration/18063-309085/kubeconfig - will repair!
	I0229 02:15:24.155887  361093 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18063-309085/kubeconfig: {Name:mkd85f0f36cbc770f723a754929a01738907b7bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:15:24.157235  361093 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0229 02:15:24.167314  361093 api_server.go:166] Checking apiserver status ...
	I0229 02:15:24.167357  361093 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:15:24.183158  361093 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:15:24.667580  361093 api_server.go:166] Checking apiserver status ...
	I0229 02:15:24.667698  361093 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:15:24.684726  361093 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:15:25.168335  361093 api_server.go:166] Checking apiserver status ...
	I0229 02:15:25.168431  361093 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:15:25.186032  361093 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:15:22.672998  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:23.173387  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:23.673270  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:24.173552  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:24.673074  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:25.173423  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:25.673502  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:26.173531  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:26.672644  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:27.173372  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:23.737162  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:15:26.235726  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:15:24.782276  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:15:27.278368  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:15:25.667972  361093 api_server.go:166] Checking apiserver status ...
	I0229 02:15:25.668059  361093 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:15:25.683528  361093 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:15:26.168096  361093 api_server.go:166] Checking apiserver status ...
	I0229 02:15:26.168217  361093 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:15:26.187348  361093 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:15:26.667839  361093 api_server.go:166] Checking apiserver status ...
	I0229 02:15:26.667920  361093 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:15:26.681557  361093 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:15:27.168163  361093 api_server.go:166] Checking apiserver status ...
	I0229 02:15:27.168262  361093 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:15:27.182779  361093 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:15:27.667408  361093 api_server.go:166] Checking apiserver status ...
	I0229 02:15:27.667531  361093 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:15:27.685526  361093 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:15:28.167636  361093 api_server.go:166] Checking apiserver status ...
	I0229 02:15:28.167744  361093 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:15:28.182746  361093 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:15:28.668333  361093 api_server.go:166] Checking apiserver status ...
	I0229 02:15:28.668407  361093 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:15:28.682544  361093 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:15:29.168119  361093 api_server.go:166] Checking apiserver status ...
	I0229 02:15:29.168237  361093 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:15:29.186304  361093 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:15:29.667836  361093 api_server.go:166] Checking apiserver status ...
	I0229 02:15:29.667914  361093 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:15:29.682884  361093 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:15:30.167618  361093 api_server.go:166] Checking apiserver status ...
	I0229 02:15:30.167731  361093 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:15:30.183089  361093 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:15:27.672738  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:28.173326  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:28.673063  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:29.173178  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:29.673323  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:30.173306  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:30.673429  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:31.172889  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:31.672643  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:32.173215  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:28.239896  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:15:30.735621  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:15:32.736326  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:15:29.278986  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:15:31.777035  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:15:33.777456  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:15:30.667487  361093 api_server.go:166] Checking apiserver status ...
	I0229 02:15:30.667592  361093 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:15:30.685344  361093 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:15:31.167811  361093 api_server.go:166] Checking apiserver status ...
	I0229 02:15:31.167925  361093 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:15:31.185254  361093 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:15:31.667737  361093 api_server.go:166] Checking apiserver status ...
	I0229 02:15:31.667837  361093 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:15:31.681151  361093 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:15:32.167727  361093 api_server.go:166] Checking apiserver status ...
	I0229 02:15:32.167846  361093 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:15:32.188215  361093 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:15:32.667436  361093 api_server.go:166] Checking apiserver status ...
	I0229 02:15:32.667540  361093 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:15:32.683006  361093 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:15:33.167461  361093 api_server.go:166] Checking apiserver status ...
	I0229 02:15:33.167553  361093 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:15:33.180891  361093 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:15:33.667404  361093 api_server.go:166] Checking apiserver status ...
	I0229 02:15:33.667497  361093 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:15:33.686220  361093 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:15:34.167884  361093 api_server.go:166] Checking apiserver status ...
	I0229 02:15:34.167985  361093 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:15:34.181808  361093 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:15:34.181848  361093 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0229 02:15:34.181863  361093 kubeadm.go:1135] stopping kube-system containers ...
	I0229 02:15:34.181878  361093 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0229 02:15:34.181945  361093 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0229 02:15:34.226002  361093 cri.go:89] found id: "b8995b533d655528c2a8eb0c53c0fbb42bbcbe14783dd91f6cd5138bba8e2549"
	I0229 02:15:34.226036  361093 cri.go:89] found id: "42aa7bcbeb0511f905d4e224009d29ed4c569f9cd550d41c85ba6f1ba223c630"
	I0229 02:15:34.226043  361093 cri.go:89] found id: "88e69487fd9180cc2d5f9ec0208eac8913eadd02ba14d3b34ced6bbfeb665662"
	I0229 02:15:34.226048  361093 cri.go:89] found id: "a41584589c6bb92714424efe15054130b4ceb896f3156efa2a7766d6b59d9348"
	I0229 02:15:34.226052  361093 cri.go:89] found id: "b25459b6c6752189380e056e8069c48978248a2010e3dcf020d4fb1d86ede5bb"
	I0229 02:15:34.226058  361093 cri.go:89] found id: "05238b2e56a497eb44499d65d50fbd05b37efc25e20c7faac8ad0a3850f903a4"
	I0229 02:15:34.226062  361093 cri.go:89] found id: "2211513b5227406a5b1d1ef326c875559d34343ee2fbbcaed625e049fb7ed1dd"
	I0229 02:15:34.226067  361093 cri.go:89] found id: "8e4fbd5378900f0e966b2c43bb36dc6420963546aef89d8c08eff3b48520a5b3"
	I0229 02:15:34.226072  361093 cri.go:89] found id: ""
	I0229 02:15:34.226101  361093 cri.go:234] Stopping containers: [b8995b533d655528c2a8eb0c53c0fbb42bbcbe14783dd91f6cd5138bba8e2549 42aa7bcbeb0511f905d4e224009d29ed4c569f9cd550d41c85ba6f1ba223c630 88e69487fd9180cc2d5f9ec0208eac8913eadd02ba14d3b34ced6bbfeb665662 a41584589c6bb92714424efe15054130b4ceb896f3156efa2a7766d6b59d9348 b25459b6c6752189380e056e8069c48978248a2010e3dcf020d4fb1d86ede5bb 05238b2e56a497eb44499d65d50fbd05b37efc25e20c7faac8ad0a3850f903a4 2211513b5227406a5b1d1ef326c875559d34343ee2fbbcaed625e049fb7ed1dd 8e4fbd5378900f0e966b2c43bb36dc6420963546aef89d8c08eff3b48520a5b3]
	I0229 02:15:34.226179  361093 ssh_runner.go:195] Run: which crictl
	I0229 02:15:34.230963  361093 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop --timeout=10 b8995b533d655528c2a8eb0c53c0fbb42bbcbe14783dd91f6cd5138bba8e2549 42aa7bcbeb0511f905d4e224009d29ed4c569f9cd550d41c85ba6f1ba223c630 88e69487fd9180cc2d5f9ec0208eac8913eadd02ba14d3b34ced6bbfeb665662 a41584589c6bb92714424efe15054130b4ceb896f3156efa2a7766d6b59d9348 b25459b6c6752189380e056e8069c48978248a2010e3dcf020d4fb1d86ede5bb 05238b2e56a497eb44499d65d50fbd05b37efc25e20c7faac8ad0a3850f903a4 2211513b5227406a5b1d1ef326c875559d34343ee2fbbcaed625e049fb7ed1dd 8e4fbd5378900f0e966b2c43bb36dc6420963546aef89d8c08eff3b48520a5b3
	I0229 02:15:34.280013  361093 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0229 02:15:34.303092  361093 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 02:15:34.313538  361093 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 02:15:34.313601  361093 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0229 02:15:34.324217  361093 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0229 02:15:34.324245  361093 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 02:15:34.474732  361093 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 02:15:32.672712  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:33.172874  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:33.672874  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:34.173296  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:34.673021  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:35.172643  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:35.672743  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:36.172648  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:36.673171  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:37.172582  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:35.237112  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:15:37.240703  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:15:35.779547  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:15:37.779743  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:15:35.326453  361093 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0229 02:15:35.551798  361093 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 02:15:35.634250  361093 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0229 02:15:35.722113  361093 api_server.go:52] waiting for apiserver process to appear ...
	I0229 02:15:35.722208  361093 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:36.222305  361093 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:36.723392  361093 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:37.223304  361093 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:37.251520  361093 api_server.go:72] duration metric: took 1.52940545s to wait for apiserver process to appear ...
	I0229 02:15:37.251556  361093 api_server.go:88] waiting for apiserver healthz status ...
	I0229 02:15:37.251583  361093 api_server.go:253] Checking apiserver healthz at https://192.168.39.252:8443/healthz ...
	I0229 02:15:37.252131  361093 api_server.go:269] stopped: https://192.168.39.252:8443/healthz: Get "https://192.168.39.252:8443/healthz": dial tcp 192.168.39.252:8443: connect: connection refused
	I0229 02:15:37.751668  361093 api_server.go:253] Checking apiserver healthz at https://192.168.39.252:8443/healthz ...
	I0229 02:15:40.172368  361093 api_server.go:279] https://192.168.39.252:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0229 02:15:40.172411  361093 api_server.go:103] status: https://192.168.39.252:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0229 02:15:40.172431  361093 api_server.go:253] Checking apiserver healthz at https://192.168.39.252:8443/healthz ...
	I0229 02:15:40.219812  361093 api_server.go:279] https://192.168.39.252:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0229 02:15:40.219848  361093 api_server.go:103] status: https://192.168.39.252:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0229 02:15:40.251758  361093 api_server.go:253] Checking apiserver healthz at https://192.168.39.252:8443/healthz ...
	I0229 02:15:40.277955  361093 api_server.go:279] https://192.168.39.252:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0229 02:15:40.277987  361093 api_server.go:103] status: https://192.168.39.252:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0229 02:15:40.751985  361093 api_server.go:253] Checking apiserver healthz at https://192.168.39.252:8443/healthz ...
	I0229 02:15:40.760486  361093 api_server.go:279] https://192.168.39.252:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0229 02:15:40.760517  361093 api_server.go:103] status: https://192.168.39.252:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0229 02:15:41.252018  361093 api_server.go:253] Checking apiserver healthz at https://192.168.39.252:8443/healthz ...
	I0229 02:15:41.266211  361093 api_server.go:279] https://192.168.39.252:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0229 02:15:41.266256  361093 api_server.go:103] status: https://192.168.39.252:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0229 02:15:41.751788  361093 api_server.go:253] Checking apiserver healthz at https://192.168.39.252:8443/healthz ...
	I0229 02:15:41.761815  361093 api_server.go:279] https://192.168.39.252:8443/healthz returned 200:
	ok
	I0229 02:15:41.772061  361093 api_server.go:141] control plane version: v1.28.4
	I0229 02:15:41.772105  361093 api_server.go:131] duration metric: took 4.520539314s to wait for apiserver health ...
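The 4.5s wait above steps the apiserver through its startup states: 403 while requests are still anonymous (RBAC bootstrap pending), then 500 with individual [-] poststarthook failures, then 200 once every hook reports ok. The same probe can be scripted against the endpoint (the -k flag is a stand-in for the client certificates minikube really uses; ?verbose reproduces the per-hook lines):

    until curl -ks https://192.168.39.252:8443/healthz | grep -qx ok; do sleep 0.5; done
    curl -ks 'https://192.168.39.252:8443/healthz?verbose'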
	I0229 02:15:41.772119  361093 cni.go:84] Creating CNI manager for ""
	I0229 02:15:41.772128  361093 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0229 02:15:41.774160  361093 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0229 02:15:37.672994  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:38.172969  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:38.673225  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:39.173291  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:39.673458  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:40.172766  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:40.672830  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:41.173174  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:41.672618  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:42.172606  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:39.735965  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:15:41.737511  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:15:40.280036  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:15:42.777915  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:15:41.775526  361093 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0229 02:15:41.792000  361093 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0229 02:15:41.824077  361093 system_pods.go:43] waiting for kube-system pods to appear ...
	I0229 02:15:41.837796  361093 system_pods.go:59] 8 kube-system pods found
	I0229 02:15:41.837831  361093 system_pods.go:61] "coredns-5dd5756b68-jg9n5" [138dcd77-9fb3-4537-9459-87349af766d9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0229 02:15:41.837839  361093 system_pods.go:61] "etcd-embed-certs-665766" [039cfea9-3fcf-4a51-85b9-63c0977c701f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0229 02:15:41.837847  361093 system_pods.go:61] "kube-apiserver-embed-certs-665766" [6cb7255e-9e43-4b01-a138-34734a11139b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0229 02:15:41.837854  361093 system_pods.go:61] "kube-controller-manager-embed-certs-665766" [aa50c4f2-0528-4366-bc5c-4b625ddbb3cc] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0229 02:15:41.837862  361093 system_pods.go:61] "kube-proxy-xctbw" [ab0177e6-72c5-4bdf-a6b4-fa28d0a500eb] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0229 02:15:41.837867  361093 system_pods.go:61] "kube-scheduler-embed-certs-665766" [0013ea0f-3fa3-426e-8e0f-709889bb7239] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0229 02:15:41.837873  361093 system_pods.go:61] "metrics-server-57f55c9bc5-9sdkl" [5d0edfb3-db05-4877-b2e1-b7dda944ee2e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0229 02:15:41.837878  361093 system_pods.go:61] "storage-provisioner" [1bfb386b-a55e-47c2-873c-894fb156094f] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0229 02:15:41.837885  361093 system_pods.go:74] duration metric: took 13.782999ms to wait for pod list to return data ...
	I0229 02:15:41.837894  361093 node_conditions.go:102] verifying NodePressure condition ...
	I0229 02:15:41.846499  361093 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0229 02:15:41.846534  361093 node_conditions.go:123] node cpu capacity is 2
	I0229 02:15:41.846549  361093 node_conditions.go:105] duration metric: took 8.649228ms to run NodePressure ...
	I0229 02:15:41.846602  361093 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 02:15:42.233849  361093 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0229 02:15:42.244135  361093 kubeadm.go:787] kubelet initialised
	I0229 02:15:42.244157  361093 kubeadm.go:788] duration metric: took 10.283459ms waiting for restarted kubelet to initialise ...
	I0229 02:15:42.244165  361093 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
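The extra wait announced above is a per-pod readiness gate over the listed control-plane labels, each with a 4m ceiling. The kubectl equivalent of the coredns wait that follows (label and timeout from the log):

    kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=4m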
	I0229 02:15:42.251055  361093 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-jg9n5" in "kube-system" namespace to be "Ready" ...
	I0229 02:15:44.258993  361093 pod_ready.go:102] pod "coredns-5dd5756b68-jg9n5" in "kube-system" namespace has status "Ready":"False"
	I0229 02:15:42.673016  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:43.173406  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:43.672843  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:44.173068  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:44.673562  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:45.172977  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:45.673254  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:46.172757  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:46.672796  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:47.173606  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:43.738332  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:15:46.236882  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:15:44.778794  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:15:47.278336  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:15:46.760126  361093 pod_ready.go:102] pod "coredns-5dd5756b68-jg9n5" in "kube-system" namespace has status "Ready":"False"
	I0229 02:15:48.761905  361093 pod_ready.go:102] pod "coredns-5dd5756b68-jg9n5" in "kube-system" namespace has status "Ready":"False"
	I0229 02:15:47.673527  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:48.173283  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:48.673578  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:15:48.673686  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:15:48.735531  360776 cri.go:89] found id: ""
	I0229 02:15:48.735560  360776 logs.go:276] 0 containers: []
	W0229 02:15:48.735572  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:15:48.735580  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:15:48.735665  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:15:48.777775  360776 cri.go:89] found id: ""
	I0229 02:15:48.777801  360776 logs.go:276] 0 containers: []
	W0229 02:15:48.777812  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:15:48.777819  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:15:48.777893  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:15:48.816348  360776 cri.go:89] found id: ""
	I0229 02:15:48.816382  360776 logs.go:276] 0 containers: []
	W0229 02:15:48.816391  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:15:48.816398  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:15:48.816466  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:15:48.856576  360776 cri.go:89] found id: ""
	I0229 02:15:48.856627  360776 logs.go:276] 0 containers: []
	W0229 02:15:48.856640  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:15:48.856648  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:15:48.856712  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:15:48.896298  360776 cri.go:89] found id: ""
	I0229 02:15:48.896325  360776 logs.go:276] 0 containers: []
	W0229 02:15:48.896333  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:15:48.896339  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:15:48.896419  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:15:48.939474  360776 cri.go:89] found id: ""
	I0229 02:15:48.939523  360776 logs.go:276] 0 containers: []
	W0229 02:15:48.939537  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:15:48.939545  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:15:48.939609  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:15:48.979602  360776 cri.go:89] found id: ""
	I0229 02:15:48.979630  360776 logs.go:276] 0 containers: []
	W0229 02:15:48.979642  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:15:48.979649  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:15:48.979734  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:15:49.020455  360776 cri.go:89] found id: ""
	I0229 02:15:49.020485  360776 logs.go:276] 0 containers: []
	W0229 02:15:49.020495  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:15:49.020505  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:15:49.020517  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:15:49.070608  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:15:49.070653  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:15:49.086878  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:15:49.086913  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:15:49.222506  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:15:49.222532  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:15:49.222565  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:15:49.261476  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:15:49.261507  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
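Each of the eight crictl queries in the cycle above returned empty output, which cri.go reports as found id: "" and logs.go counts as "0 containers". A standalone sketch of that empty-output convention, assuming crictl is installed on the host (minikube's container-status step additionally falls back to "docker ps -a", as the last Run: line shows):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainers asks crictl for the IDs of all containers (running or not)
// whose name matches, and treats empty output as "no containers found".
func listContainers(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	var ids []string
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line != "" {
			ids = append(ids, line)
		}
	}
	return ids, nil
}

func main() {
	for _, name := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
		ids, err := listContainers(name)
		if err != nil {
			fmt.Printf("listing %q failed: %v\n", name, err)
			continue
		}
		if len(ids) == 0 {
			fmt.Printf("No container was found matching %q\n", name)
		}
	}
}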
	I0229 02:15:51.812576  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:51.828566  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:15:51.828628  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:15:51.867885  360776 cri.go:89] found id: ""
	I0229 02:15:51.867913  360776 logs.go:276] 0 containers: []
	W0229 02:15:51.867922  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:15:51.867928  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:15:51.867999  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:15:51.910828  360776 cri.go:89] found id: ""
	I0229 02:15:51.910862  360776 logs.go:276] 0 containers: []
	W0229 02:15:51.910872  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:15:51.910879  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:15:51.910928  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:15:51.951547  360776 cri.go:89] found id: ""
	I0229 02:15:51.951578  360776 logs.go:276] 0 containers: []
	W0229 02:15:51.951590  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:15:51.951598  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:15:51.951683  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:15:51.992485  360776 cri.go:89] found id: ""
	I0229 02:15:51.992511  360776 logs.go:276] 0 containers: []
	W0229 02:15:51.992519  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:15:51.992525  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:15:51.992579  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:15:52.036445  360776 cri.go:89] found id: ""
	I0229 02:15:52.036481  360776 logs.go:276] 0 containers: []
	W0229 02:15:52.036494  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:15:52.036502  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:15:52.036567  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:15:52.075247  360776 cri.go:89] found id: ""
	I0229 02:15:52.075279  360776 logs.go:276] 0 containers: []
	W0229 02:15:52.075289  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:15:52.075298  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:15:52.075379  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:15:52.117468  360776 cri.go:89] found id: ""
	I0229 02:15:52.117498  360776 logs.go:276] 0 containers: []
	W0229 02:15:52.117507  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:15:52.117513  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:15:52.117575  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:15:52.156923  360776 cri.go:89] found id: ""
	I0229 02:15:52.156953  360776 logs.go:276] 0 containers: []
	W0229 02:15:52.156962  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:15:52.156972  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:15:52.156984  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:15:52.209140  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:15:52.209181  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:15:52.224877  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:15:52.224952  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:15:52.313049  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:15:52.313079  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:15:52.313096  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:15:48.237478  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:15:50.737111  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:15:52.737652  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:15:49.777365  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:15:51.778542  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:15:51.260945  361093 pod_ready.go:102] pod "coredns-5dd5756b68-jg9n5" in "kube-system" namespace has status "Ready":"False"
	I0229 02:15:52.758125  361093 pod_ready.go:92] pod "coredns-5dd5756b68-jg9n5" in "kube-system" namespace has status "Ready":"True"
	I0229 02:15:52.758156  361093 pod_ready.go:81] duration metric: took 10.507075504s waiting for pod "coredns-5dd5756b68-jg9n5" in "kube-system" namespace to be "Ready" ...
	I0229 02:15:52.758168  361093 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-665766" in "kube-system" namespace to be "Ready" ...
	I0229 02:15:54.767738  361093 pod_ready.go:102] pod "etcd-embed-certs-665766" in "kube-system" namespace has status "Ready":"False"
	I0229 02:15:52.361468  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:15:52.361520  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:15:54.934192  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:54.950604  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:15:54.950673  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:15:54.997665  360776 cri.go:89] found id: ""
	I0229 02:15:54.997700  360776 logs.go:276] 0 containers: []
	W0229 02:15:54.997713  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:15:54.997738  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:15:54.997824  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:15:55.043835  360776 cri.go:89] found id: ""
	I0229 02:15:55.043865  360776 logs.go:276] 0 containers: []
	W0229 02:15:55.043878  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:15:55.043885  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:15:55.043952  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:15:55.084745  360776 cri.go:89] found id: ""
	I0229 02:15:55.084773  360776 logs.go:276] 0 containers: []
	W0229 02:15:55.084784  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:15:55.084793  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:15:55.084857  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:15:55.126607  360776 cri.go:89] found id: ""
	I0229 02:15:55.126638  360776 logs.go:276] 0 containers: []
	W0229 02:15:55.126652  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:15:55.126660  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:15:55.126723  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:15:55.168954  360776 cri.go:89] found id: ""
	I0229 02:15:55.168984  360776 logs.go:276] 0 containers: []
	W0229 02:15:55.168997  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:15:55.169004  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:15:55.169068  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:15:55.209769  360776 cri.go:89] found id: ""
	I0229 02:15:55.209802  360776 logs.go:276] 0 containers: []
	W0229 02:15:55.209813  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:15:55.209819  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:15:55.209874  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:15:55.252174  360776 cri.go:89] found id: ""
	I0229 02:15:55.252206  360776 logs.go:276] 0 containers: []
	W0229 02:15:55.252218  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:15:55.252226  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:15:55.252280  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:15:55.301449  360776 cri.go:89] found id: ""
	I0229 02:15:55.301483  360776 logs.go:276] 0 containers: []
	W0229 02:15:55.301496  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:15:55.301508  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:15:55.301524  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:15:55.406764  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:15:55.406785  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:15:55.406810  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:15:55.450166  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:15:55.450213  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:15:55.499652  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:15:55.499703  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:15:55.548616  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:15:55.548665  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
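The recurring "connection to the server localhost:8443 was refused" is consistent with the empty crictl listings: no kube-apiserver container exists, so kubectl has nothing to connect to. A quick diagnostic probe, hitting the apiserver's /healthz endpoint directly, distinguishes "refused" from "up but unhealthy"; skipping TLS verification is a shortcut acceptable only for a localhost check like this:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   3 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://localhost:8443/healthz")
	if err != nil {
		// Matches the refused connections in the log: nothing is listening.
		fmt.Println("apiserver unreachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz: %s (%d)\n", body, resp.StatusCode)
}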
	I0229 02:15:54.738939  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:15:57.236199  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:15:54.278386  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:15:56.779465  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:15:55.767698  361093 pod_ready.go:92] pod "etcd-embed-certs-665766" in "kube-system" namespace has status "Ready":"True"
	I0229 02:15:55.767724  361093 pod_ready.go:81] duration metric: took 3.009548645s waiting for pod "etcd-embed-certs-665766" in "kube-system" namespace to be "Ready" ...
	I0229 02:15:55.767733  361093 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-665766" in "kube-system" namespace to be "Ready" ...
	I0229 02:15:55.777263  361093 pod_ready.go:92] pod "kube-apiserver-embed-certs-665766" in "kube-system" namespace has status "Ready":"True"
	I0229 02:15:55.777303  361093 pod_ready.go:81] duration metric: took 9.561735ms waiting for pod "kube-apiserver-embed-certs-665766" in "kube-system" namespace to be "Ready" ...
	I0229 02:15:55.777315  361093 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-665766" in "kube-system" namespace to be "Ready" ...
	I0229 02:15:55.785388  361093 pod_ready.go:92] pod "kube-controller-manager-embed-certs-665766" in "kube-system" namespace has status "Ready":"True"
	I0229 02:15:55.785410  361093 pod_ready.go:81] duration metric: took 8.086257ms waiting for pod "kube-controller-manager-embed-certs-665766" in "kube-system" namespace to be "Ready" ...
	I0229 02:15:55.785420  361093 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-xctbw" in "kube-system" namespace to be "Ready" ...
	I0229 02:15:55.791419  361093 pod_ready.go:92] pod "kube-proxy-xctbw" in "kube-system" namespace has status "Ready":"True"
	I0229 02:15:55.791437  361093 pod_ready.go:81] duration metric: took 6.009783ms waiting for pod "kube-proxy-xctbw" in "kube-system" namespace to be "Ready" ...
	I0229 02:15:55.791448  361093 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-665766" in "kube-system" namespace to be "Ready" ...
	I0229 02:15:56.799602  361093 pod_ready.go:92] pod "kube-scheduler-embed-certs-665766" in "kube-system" namespace has status "Ready":"True"
	I0229 02:15:56.799631  361093 pod_ready.go:81] duration metric: took 1.008175236s waiting for pod "kube-scheduler-embed-certs-665766" in "kube-system" namespace to be "Ready" ...
	I0229 02:15:56.799644  361093 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace to be "Ready" ...
	I0229 02:15:58.807838  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
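At this point every core control-plane pod for embed-certs-665766 has gone Ready in sequence (coredns, etcd, apiserver, controller-manager, proxy, scheduler), leaving only metrics-server. Those are exactly the label selectors declared at the top of the system-critical wait; a hedged client-go sketch that lists pods by those selectors, with the kubeconfig path assumed:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	// Selectors copied from the wait declaration earlier in this log.
	selectors := []string{
		"k8s-app=kube-dns", "component=etcd", "component=kube-apiserver",
		"component=kube-controller-manager", "k8s-app=kube-proxy", "component=kube-scheduler",
	}
	for _, sel := range selectors {
		pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{LabelSelector: sel})
		if err != nil {
			fmt.Printf("%s: list failed: %v\n", sel, err)
			continue
		}
		for _, p := range pods.Items {
			fmt.Printf("%s -> %s (phase %s)\n", sel, p.Name, p.Status.Phase)
		}
	}
}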
	I0229 02:15:58.064634  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:58.080287  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:15:58.080365  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:15:58.119448  360776 cri.go:89] found id: ""
	I0229 02:15:58.119480  360776 logs.go:276] 0 containers: []
	W0229 02:15:58.119492  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:15:58.119500  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:15:58.119563  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:15:58.159896  360776 cri.go:89] found id: ""
	I0229 02:15:58.159926  360776 logs.go:276] 0 containers: []
	W0229 02:15:58.159937  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:15:58.159945  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:15:58.160009  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:15:58.197746  360776 cri.go:89] found id: ""
	I0229 02:15:58.197774  360776 logs.go:276] 0 containers: []
	W0229 02:15:58.197785  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:15:58.197794  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:15:58.197873  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:15:58.242003  360776 cri.go:89] found id: ""
	I0229 02:15:58.242031  360776 logs.go:276] 0 containers: []
	W0229 02:15:58.242043  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:15:58.242051  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:15:58.242143  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:15:58.282762  360776 cri.go:89] found id: ""
	I0229 02:15:58.282795  360776 logs.go:276] 0 containers: []
	W0229 02:15:58.282815  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:15:58.282823  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:15:58.282889  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:15:58.324333  360776 cri.go:89] found id: ""
	I0229 02:15:58.324364  360776 logs.go:276] 0 containers: []
	W0229 02:15:58.324374  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:15:58.324380  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:15:58.324436  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:15:58.392279  360776 cri.go:89] found id: ""
	I0229 02:15:58.392308  360776 logs.go:276] 0 containers: []
	W0229 02:15:58.392321  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:15:58.392329  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:15:58.392390  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:15:58.448147  360776 cri.go:89] found id: ""
	I0229 02:15:58.448181  360776 logs.go:276] 0 containers: []
	W0229 02:15:58.448194  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:15:58.448211  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:15:58.448259  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:15:58.501620  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:15:58.501657  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:15:58.519453  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:15:58.519486  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:15:58.595868  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:15:58.595897  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:15:58.595917  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:15:58.630969  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:15:58.631004  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:16:01.181602  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:16:01.196379  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:16:01.196456  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:16:01.237984  360776 cri.go:89] found id: ""
	I0229 02:16:01.238008  360776 logs.go:276] 0 containers: []
	W0229 02:16:01.238019  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:16:01.238028  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:16:01.238109  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:16:01.284709  360776 cri.go:89] found id: ""
	I0229 02:16:01.284737  360776 logs.go:276] 0 containers: []
	W0229 02:16:01.284748  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:16:01.284756  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:16:01.284829  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:16:01.328675  360776 cri.go:89] found id: ""
	I0229 02:16:01.328711  360776 logs.go:276] 0 containers: []
	W0229 02:16:01.328724  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:16:01.328732  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:16:01.328787  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:16:01.384088  360776 cri.go:89] found id: ""
	I0229 02:16:01.384118  360776 logs.go:276] 0 containers: []
	W0229 02:16:01.384127  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:16:01.384133  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:16:01.384182  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:16:01.444582  360776 cri.go:89] found id: ""
	I0229 02:16:01.444617  360776 logs.go:276] 0 containers: []
	W0229 02:16:01.444630  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:16:01.444638  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:16:01.444709  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:16:01.483202  360776 cri.go:89] found id: ""
	I0229 02:16:01.483237  360776 logs.go:276] 0 containers: []
	W0229 02:16:01.483250  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:16:01.483258  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:16:01.483327  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:16:01.520422  360776 cri.go:89] found id: ""
	I0229 02:16:01.520455  360776 logs.go:276] 0 containers: []
	W0229 02:16:01.520467  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:16:01.520475  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:16:01.520545  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:16:01.558295  360776 cri.go:89] found id: ""
	I0229 02:16:01.558327  360776 logs.go:276] 0 containers: []
	W0229 02:16:01.558336  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:16:01.558348  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:16:01.558363  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:16:01.594473  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:16:01.594508  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:16:01.640865  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:16:01.640906  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:16:01.691693  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:16:01.691746  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:16:01.708474  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:16:01.708507  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:16:01.788334  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:15:59.237127  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:01.237269  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:15:59.278029  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:01.278662  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:03.280874  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:01.309386  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:03.807534  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:04.288565  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:16:04.304344  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:16:04.304435  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:16:04.364586  360776 cri.go:89] found id: ""
	I0229 02:16:04.364623  360776 logs.go:276] 0 containers: []
	W0229 02:16:04.364635  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:16:04.364643  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:16:04.364712  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:16:04.423593  360776 cri.go:89] found id: ""
	I0229 02:16:04.423624  360776 logs.go:276] 0 containers: []
	W0229 02:16:04.423637  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:16:04.423646  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:16:04.423715  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:16:04.463437  360776 cri.go:89] found id: ""
	I0229 02:16:04.463471  360776 logs.go:276] 0 containers: []
	W0229 02:16:04.463482  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:16:04.463491  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:16:04.463553  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:16:04.500526  360776 cri.go:89] found id: ""
	I0229 02:16:04.500550  360776 logs.go:276] 0 containers: []
	W0229 02:16:04.500559  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:16:04.500565  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:16:04.500646  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:16:04.541324  360776 cri.go:89] found id: ""
	I0229 02:16:04.541363  360776 logs.go:276] 0 containers: []
	W0229 02:16:04.541376  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:16:04.541389  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:16:04.541466  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:16:04.586036  360776 cri.go:89] found id: ""
	I0229 02:16:04.586063  360776 logs.go:276] 0 containers: []
	W0229 02:16:04.586071  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:16:04.586093  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:16:04.586221  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:16:04.624838  360776 cri.go:89] found id: ""
	I0229 02:16:04.624864  360776 logs.go:276] 0 containers: []
	W0229 02:16:04.624873  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:16:04.624879  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:16:04.624942  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:16:04.665188  360776 cri.go:89] found id: ""
	I0229 02:16:04.665214  360776 logs.go:276] 0 containers: []
	W0229 02:16:04.665223  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:16:04.665235  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:16:04.665248  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:16:04.710572  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:16:04.710608  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:16:04.759440  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:16:04.759473  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:16:04.777220  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:16:04.777252  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:16:04.855773  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:16:04.855802  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:16:04.855820  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:16:03.736436  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:06.238443  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:05.779438  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:08.279021  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:05.808060  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:08.307721  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:07.391235  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:16:07.407347  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:16:07.407424  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:16:07.456950  360776 cri.go:89] found id: ""
	I0229 02:16:07.456978  360776 logs.go:276] 0 containers: []
	W0229 02:16:07.456988  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:16:07.456994  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:16:07.457056  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:16:07.501947  360776 cri.go:89] found id: ""
	I0229 02:16:07.501978  360776 logs.go:276] 0 containers: []
	W0229 02:16:07.501989  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:16:07.501996  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:16:07.502055  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:16:07.543248  360776 cri.go:89] found id: ""
	I0229 02:16:07.543283  360776 logs.go:276] 0 containers: []
	W0229 02:16:07.543296  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:16:07.543303  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:16:07.543369  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:16:07.580554  360776 cri.go:89] found id: ""
	I0229 02:16:07.580587  360776 logs.go:276] 0 containers: []
	W0229 02:16:07.580599  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:16:07.580606  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:16:07.580674  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:16:07.618930  360776 cri.go:89] found id: ""
	I0229 02:16:07.618955  360776 logs.go:276] 0 containers: []
	W0229 02:16:07.618966  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:16:07.618974  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:16:07.619038  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:16:07.656206  360776 cri.go:89] found id: ""
	I0229 02:16:07.656237  360776 logs.go:276] 0 containers: []
	W0229 02:16:07.656246  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:16:07.656252  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:16:07.656312  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:16:07.692225  360776 cri.go:89] found id: ""
	I0229 02:16:07.692255  360776 logs.go:276] 0 containers: []
	W0229 02:16:07.692266  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:16:07.692273  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:16:07.692334  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:16:07.728085  360776 cri.go:89] found id: ""
	I0229 02:16:07.728118  360776 logs.go:276] 0 containers: []
	W0229 02:16:07.728130  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:16:07.728143  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:16:07.728161  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:16:07.744078  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:16:07.744102  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:16:07.819861  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:16:07.819891  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:16:07.819906  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:16:07.854665  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:16:07.854694  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:16:07.899029  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:16:07.899059  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:16:10.449274  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:16:10.466228  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:16:10.466305  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:16:10.516655  360776 cri.go:89] found id: ""
	I0229 02:16:10.516686  360776 logs.go:276] 0 containers: []
	W0229 02:16:10.516699  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:16:10.516707  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:16:10.516776  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:16:10.551194  360776 cri.go:89] found id: ""
	I0229 02:16:10.551222  360776 logs.go:276] 0 containers: []
	W0229 02:16:10.551240  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:16:10.551247  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:16:10.551309  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:16:10.586984  360776 cri.go:89] found id: ""
	I0229 02:16:10.587012  360776 logs.go:276] 0 containers: []
	W0229 02:16:10.587021  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:16:10.587033  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:16:10.587101  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:16:10.631726  360776 cri.go:89] found id: ""
	I0229 02:16:10.631758  360776 logs.go:276] 0 containers: []
	W0229 02:16:10.631768  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:16:10.631775  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:16:10.631831  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:16:10.673054  360776 cri.go:89] found id: ""
	I0229 02:16:10.673090  360776 logs.go:276] 0 containers: []
	W0229 02:16:10.673102  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:16:10.673110  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:16:10.673175  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:16:10.716401  360776 cri.go:89] found id: ""
	I0229 02:16:10.716428  360776 logs.go:276] 0 containers: []
	W0229 02:16:10.716437  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:16:10.716448  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:16:10.716495  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:16:10.762425  360776 cri.go:89] found id: ""
	I0229 02:16:10.762451  360776 logs.go:276] 0 containers: []
	W0229 02:16:10.762460  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:16:10.762465  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:16:10.762523  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:16:10.800934  360776 cri.go:89] found id: ""
	I0229 02:16:10.800959  360776 logs.go:276] 0 containers: []
	W0229 02:16:10.800970  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:16:10.800981  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:16:10.800995  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:16:10.851152  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:16:10.851178  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:16:10.865410  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:16:10.865436  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:16:10.941654  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:16:10.941679  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:16:10.941699  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:16:10.977068  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:16:10.977099  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
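The gathering cycle that keeps repeating for process 360776 pulls four bounded slices of state: the kubelet and containerd journals, filtered dmesg, and a container-status listing. A sketch that runs the same shell commands locally, copied verbatim from the Run: lines above; each command is capped at 400 lines, which keeps the repeated cycles cheap:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Command strings copied verbatim from the logs.go gathering steps above.
	cmds := [][2]string{
		{"kubelet", "sudo journalctl -u kubelet -n 400"},
		{"containerd", "sudo journalctl -u containerd -n 400"},
		{"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
		{"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
	}
	for _, c := range cmds {
		out, err := exec.Command("/bin/bash", "-c", c[1]).CombinedOutput()
		if err != nil {
			fmt.Printf("gathering %s failed: %v\n", c[0], err)
			continue
		}
		fmt.Printf("=== %s (%d bytes collected) ===\n", c[0], len(out))
	}
}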
	I0229 02:16:08.736174  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:10.738304  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:10.779517  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:13.277888  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:10.308754  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:12.807138  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:14.807518  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:13.524032  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:16:13.540646  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:16:13.540721  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:16:13.584696  360776 cri.go:89] found id: ""
	I0229 02:16:13.584727  360776 logs.go:276] 0 containers: []
	W0229 02:16:13.584740  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:16:13.584748  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:16:13.584819  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:16:13.620800  360776 cri.go:89] found id: ""
	I0229 02:16:13.620843  360776 logs.go:276] 0 containers: []
	W0229 02:16:13.620852  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:16:13.620858  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:16:13.620936  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:16:13.659179  360776 cri.go:89] found id: ""
	I0229 02:16:13.659209  360776 logs.go:276] 0 containers: []
	W0229 02:16:13.659218  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:16:13.659224  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:16:13.659286  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:16:13.695772  360776 cri.go:89] found id: ""
	I0229 02:16:13.695821  360776 logs.go:276] 0 containers: []
	W0229 02:16:13.695832  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:16:13.695840  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:16:13.695902  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:16:13.736870  360776 cri.go:89] found id: ""
	I0229 02:16:13.736895  360776 logs.go:276] 0 containers: []
	W0229 02:16:13.736906  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:16:13.736913  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:16:13.736978  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:16:13.782101  360776 cri.go:89] found id: ""
	I0229 02:16:13.782131  360776 logs.go:276] 0 containers: []
	W0229 02:16:13.782143  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:16:13.782151  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:16:13.782212  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:16:13.822638  360776 cri.go:89] found id: ""
	I0229 02:16:13.822663  360776 logs.go:276] 0 containers: []
	W0229 02:16:13.822672  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:16:13.822677  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:16:13.822741  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:16:13.861761  360776 cri.go:89] found id: ""
	I0229 02:16:13.861787  360776 logs.go:276] 0 containers: []
	W0229 02:16:13.861798  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:16:13.861811  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:16:13.861835  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:16:13.877464  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:16:13.877494  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:16:13.955485  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:16:13.955512  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:16:13.955525  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:16:13.990560  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:16:13.990594  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:16:14.037740  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:16:14.037780  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:16:16.588097  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:16:16.603732  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:16:16.603810  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:16:16.644337  360776 cri.go:89] found id: ""
	I0229 02:16:16.644372  360776 logs.go:276] 0 containers: []
	W0229 02:16:16.644393  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:16:16.644404  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:16:16.644474  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:16:16.687530  360776 cri.go:89] found id: ""
	I0229 02:16:16.687562  360776 logs.go:276] 0 containers: []
	W0229 02:16:16.687575  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:16:16.687584  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:16:16.687653  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:16:16.728007  360776 cri.go:89] found id: ""
	I0229 02:16:16.728037  360776 logs.go:276] 0 containers: []
	W0229 02:16:16.728054  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:16:16.728063  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:16:16.728125  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:16:16.770904  360776 cri.go:89] found id: ""
	I0229 02:16:16.770952  360776 logs.go:276] 0 containers: []
	W0229 02:16:16.770964  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:16:16.770973  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:16:16.771041  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:16:16.812270  360776 cri.go:89] found id: ""
	I0229 02:16:16.812294  360776 logs.go:276] 0 containers: []
	W0229 02:16:16.812303  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:16:16.812309  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:16:16.812358  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:16:16.854461  360776 cri.go:89] found id: ""
	I0229 02:16:16.854487  360776 logs.go:276] 0 containers: []
	W0229 02:16:16.854495  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:16:16.854502  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:16:16.854565  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:16:16.893048  360776 cri.go:89] found id: ""
	I0229 02:16:16.893081  360776 logs.go:276] 0 containers: []
	W0229 02:16:16.893093  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:16:16.893102  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:16:16.893175  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:16:16.934533  360776 cri.go:89] found id: ""
	I0229 02:16:16.934565  360776 logs.go:276] 0 containers: []
	W0229 02:16:16.934576  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:16:16.934589  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:16:16.934608  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:16:16.949773  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:16:16.949806  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:16:17.030457  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:16:17.030483  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:16:17.030500  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:16:17.066911  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:16:17.066947  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:16:17.141648  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:16:17.141680  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:16:13.236967  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:15.736473  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:15.278216  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:17.280028  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:17.307756  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:19.308255  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:19.697967  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:16:19.713729  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:16:19.713786  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:16:19.757898  360776 cri.go:89] found id: ""
	I0229 02:16:19.757929  360776 logs.go:276] 0 containers: []
	W0229 02:16:19.757940  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:16:19.757947  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:16:19.757998  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:16:19.807621  360776 cri.go:89] found id: ""
	I0229 02:16:19.807644  360776 logs.go:276] 0 containers: []
	W0229 02:16:19.807652  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:16:19.807658  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:16:19.807704  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:16:19.846030  360776 cri.go:89] found id: ""
	I0229 02:16:19.846060  360776 logs.go:276] 0 containers: []
	W0229 02:16:19.846071  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:16:19.846089  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:16:19.846157  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:16:19.881842  360776 cri.go:89] found id: ""
	I0229 02:16:19.881870  360776 logs.go:276] 0 containers: []
	W0229 02:16:19.881883  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:16:19.881892  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:16:19.881955  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:16:19.917791  360776 cri.go:89] found id: ""
	I0229 02:16:19.917818  360776 logs.go:276] 0 containers: []
	W0229 02:16:19.917830  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:16:19.917837  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:16:19.917922  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:16:19.954147  360776 cri.go:89] found id: ""
	I0229 02:16:19.954174  360776 logs.go:276] 0 containers: []
	W0229 02:16:19.954186  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:16:19.954194  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:16:19.954259  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:16:19.991466  360776 cri.go:89] found id: ""
	I0229 02:16:19.991495  360776 logs.go:276] 0 containers: []
	W0229 02:16:19.991505  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:16:19.991512  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:16:19.991566  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:16:20.032484  360776 cri.go:89] found id: ""
	I0229 02:16:20.032515  360776 logs.go:276] 0 containers: []
	W0229 02:16:20.032526  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:16:20.032540  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:16:20.032556  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:16:20.084743  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:16:20.084781  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:16:20.105586  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:16:20.105626  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:16:20.206486  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:16:20.206513  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:16:20.206528  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:16:20.250720  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:16:20.250748  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:16:18.235820  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:20.235852  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:22.237011  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:19.779151  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:22.278930  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:21.808852  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:24.307883  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:22.796158  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:16:22.812126  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:16:22.812208  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:16:22.849744  360776 cri.go:89] found id: ""
	I0229 02:16:22.849776  360776 logs.go:276] 0 containers: []
	W0229 02:16:22.849792  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:16:22.849800  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:16:22.849865  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:16:22.891875  360776 cri.go:89] found id: ""
	I0229 02:16:22.891909  360776 logs.go:276] 0 containers: []
	W0229 02:16:22.891921  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:16:22.891930  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:16:22.891995  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:16:22.931754  360776 cri.go:89] found id: ""
	I0229 02:16:22.931789  360776 logs.go:276] 0 containers: []
	W0229 02:16:22.931801  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:16:22.931809  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:16:22.931878  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:16:22.979291  360776 cri.go:89] found id: ""
	I0229 02:16:22.979322  360776 logs.go:276] 0 containers: []
	W0229 02:16:22.979340  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:16:22.979349  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:16:22.979437  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:16:23.028390  360776 cri.go:89] found id: ""
	I0229 02:16:23.028416  360776 logs.go:276] 0 containers: []
	W0229 02:16:23.028424  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:16:23.028430  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:16:23.028498  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:16:23.077140  360776 cri.go:89] found id: ""
	I0229 02:16:23.077174  360776 logs.go:276] 0 containers: []
	W0229 02:16:23.077187  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:16:23.077202  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:16:23.077274  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:16:23.124275  360776 cri.go:89] found id: ""
	I0229 02:16:23.124316  360776 logs.go:276] 0 containers: []
	W0229 02:16:23.124326  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:16:23.124333  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:16:23.124386  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:16:23.188748  360776 cri.go:89] found id: ""
	I0229 02:16:23.188789  360776 logs.go:276] 0 containers: []
	W0229 02:16:23.188801  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:16:23.188815  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:16:23.188833  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:16:23.247833  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:16:23.247863  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:16:23.263866  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:16:23.263891  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:16:23.347825  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:16:23.347851  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:16:23.347869  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:16:23.383517  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:16:23.383549  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
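The container-status command in these passes uses a double fallback: `which crictl || echo crictl` substitutes either the resolved path to crictl or the bare name when lookup fails, and if that listing itself fails the shell falls back to `docker ps -a`. An equivalent long form of the same one-liner:

    CRICTL="$(which crictl || echo crictl)"    # resolved path, or the bare name when not on PATH
    sudo "$CRICTL" ps -a || sudo docker ps -a  # fall back to docker if the crictl listing fails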
	I0229 02:16:25.925662  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:16:25.940548  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:16:25.940604  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:16:25.977087  360776 cri.go:89] found id: ""
	I0229 02:16:25.977107  360776 logs.go:276] 0 containers: []
	W0229 02:16:25.977116  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:16:25.977149  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:16:25.977230  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:16:26.018569  360776 cri.go:89] found id: ""
	I0229 02:16:26.018602  360776 logs.go:276] 0 containers: []
	W0229 02:16:26.018615  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:16:26.018623  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:16:26.018682  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:16:26.057726  360776 cri.go:89] found id: ""
	I0229 02:16:26.057754  360776 logs.go:276] 0 containers: []
	W0229 02:16:26.057773  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:16:26.057782  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:16:26.057838  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:16:26.097203  360776 cri.go:89] found id: ""
	I0229 02:16:26.097234  360776 logs.go:276] 0 containers: []
	W0229 02:16:26.097247  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:16:26.097256  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:16:26.097322  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:16:26.141897  360776 cri.go:89] found id: ""
	I0229 02:16:26.141925  360776 logs.go:276] 0 containers: []
	W0229 02:16:26.141941  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:16:26.141948  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:16:26.142009  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:16:26.195074  360776 cri.go:89] found id: ""
	I0229 02:16:26.195101  360776 logs.go:276] 0 containers: []
	W0229 02:16:26.195110  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:16:26.195117  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:16:26.195176  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:16:26.252131  360776 cri.go:89] found id: ""
	I0229 02:16:26.252158  360776 logs.go:276] 0 containers: []
	W0229 02:16:26.252166  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:16:26.252172  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:16:26.252249  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:16:26.292730  360776 cri.go:89] found id: ""
	I0229 02:16:26.292752  360776 logs.go:276] 0 containers: []
	W0229 02:16:26.292760  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:16:26.292770  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:16:26.292781  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:16:26.375138  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:16:26.375165  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:16:26.375182  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:16:26.410167  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:16:26.410196  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:16:26.453622  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:16:26.453665  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:16:26.503732  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:16:26.503762  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:16:24.740152  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:27.236389  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:24.777323  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:26.778399  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:28.779480  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:26.308285  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:28.806555  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
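The interleaved pod_ready lines come from three other test processes (PIDs 360079, 360217, and 361093) concurrently polling their own metrics-server pods, none of which ever reports Ready, while PID 360776 keeps retrying its apiserver probe. The polling those lines reflect is roughly what kubectl's wait verb does; the label selector below is an assumption for illustration, since the log only shows pod names:

    # Hypothetical equivalent of the pod_ready poll (the label is assumed, not taken from the log):
    kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=metrics-server --timeout=5m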
	I0229 02:16:29.018838  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:16:29.034894  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:16:29.034963  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:16:29.086433  360776 cri.go:89] found id: ""
	I0229 02:16:29.086460  360776 logs.go:276] 0 containers: []
	W0229 02:16:29.086472  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:16:29.086481  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:16:29.086562  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:16:29.134575  360776 cri.go:89] found id: ""
	I0229 02:16:29.134606  360776 logs.go:276] 0 containers: []
	W0229 02:16:29.134619  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:16:29.134627  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:16:29.134701  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:16:29.186372  360776 cri.go:89] found id: ""
	I0229 02:16:29.186408  360776 logs.go:276] 0 containers: []
	W0229 02:16:29.186420  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:16:29.186427  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:16:29.186481  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:16:29.236276  360776 cri.go:89] found id: ""
	I0229 02:16:29.236299  360776 logs.go:276] 0 containers: []
	W0229 02:16:29.236306  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:16:29.236312  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:16:29.236361  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:16:29.280342  360776 cri.go:89] found id: ""
	I0229 02:16:29.280371  360776 logs.go:276] 0 containers: []
	W0229 02:16:29.280380  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:16:29.280389  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:16:29.280461  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:16:29.325017  360776 cri.go:89] found id: ""
	I0229 02:16:29.325047  360776 logs.go:276] 0 containers: []
	W0229 02:16:29.325059  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:16:29.325068  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:16:29.325139  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:16:29.367912  360776 cri.go:89] found id: ""
	I0229 02:16:29.367941  360776 logs.go:276] 0 containers: []
	W0229 02:16:29.367951  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:16:29.367957  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:16:29.368021  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:16:29.404499  360776 cri.go:89] found id: ""
	I0229 02:16:29.404528  360776 logs.go:276] 0 containers: []
	W0229 02:16:29.404538  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:16:29.404548  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:16:29.404562  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:16:29.419724  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:16:29.419755  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:16:29.501923  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:16:29.501952  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:16:29.501971  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:16:29.536724  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:16:29.536762  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:16:29.579709  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:16:29.579744  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
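Each pass opens with the pgrep probe seen on the next line: -f matches the pattern against the full command line, -x requires that match to be exact, and -n keeps only the newest matching process, so when no kube-apiserver is running the command prints nothing and exits nonzero, after which the crictl sweep follows. The same probe in isolation, quoted here for clarity:

    # Exits nonzero with no output when no kube-apiserver process matches.
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'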
	I0229 02:16:32.129825  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:16:32.147723  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:16:32.147815  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:16:32.206978  360776 cri.go:89] found id: ""
	I0229 02:16:32.207016  360776 logs.go:276] 0 containers: []
	W0229 02:16:32.207030  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:16:32.207041  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:16:32.207140  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:16:32.265296  360776 cri.go:89] found id: ""
	I0229 02:16:32.265328  360776 logs.go:276] 0 containers: []
	W0229 02:16:32.265341  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:16:32.265350  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:16:32.265418  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:16:32.312827  360776 cri.go:89] found id: ""
	I0229 02:16:32.312862  360776 logs.go:276] 0 containers: []
	W0229 02:16:32.312874  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:16:32.312882  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:16:32.312946  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:16:29.736263  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:32.238217  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:31.277342  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:33.279528  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:30.806969  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:32.808795  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:32.359988  360776 cri.go:89] found id: ""
	I0229 02:16:32.360024  360776 logs.go:276] 0 containers: []
	W0229 02:16:32.360036  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:16:32.360045  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:16:32.360106  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:16:32.400969  360776 cri.go:89] found id: ""
	I0229 02:16:32.401003  360776 logs.go:276] 0 containers: []
	W0229 02:16:32.401015  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:16:32.401022  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:16:32.401075  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:16:32.437371  360776 cri.go:89] found id: ""
	I0229 02:16:32.437402  360776 logs.go:276] 0 containers: []
	W0229 02:16:32.437411  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:16:32.437419  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:16:32.437491  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:16:32.481199  360776 cri.go:89] found id: ""
	I0229 02:16:32.481227  360776 logs.go:276] 0 containers: []
	W0229 02:16:32.481238  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:16:32.481247  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:16:32.481329  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:16:32.528100  360776 cri.go:89] found id: ""
	I0229 02:16:32.528137  360776 logs.go:276] 0 containers: []
	W0229 02:16:32.528150  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:16:32.528163  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:16:32.528180  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:16:32.565087  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:16:32.565122  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:16:32.616350  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:16:32.616382  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:16:32.669978  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:16:32.670015  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:16:32.684373  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:16:32.684399  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:16:32.769992  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:16:35.270148  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:16:35.289949  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:16:35.290050  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:16:35.334051  360776 cri.go:89] found id: ""
	I0229 02:16:35.334091  360776 logs.go:276] 0 containers: []
	W0229 02:16:35.334103  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:16:35.334112  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:16:35.334170  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:16:35.378536  360776 cri.go:89] found id: ""
	I0229 02:16:35.378571  360776 logs.go:276] 0 containers: []
	W0229 02:16:35.378585  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:16:35.378594  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:16:35.378660  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:16:35.417867  360776 cri.go:89] found id: ""
	I0229 02:16:35.417894  360776 logs.go:276] 0 containers: []
	W0229 02:16:35.417905  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:16:35.417914  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:16:35.417982  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:16:35.455848  360776 cri.go:89] found id: ""
	I0229 02:16:35.455874  360776 logs.go:276] 0 containers: []
	W0229 02:16:35.455887  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:16:35.455896  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:16:35.455964  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:16:35.494787  360776 cri.go:89] found id: ""
	I0229 02:16:35.494814  360776 logs.go:276] 0 containers: []
	W0229 02:16:35.494822  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:16:35.494828  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:16:35.494890  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:16:35.533553  360776 cri.go:89] found id: ""
	I0229 02:16:35.533583  360776 logs.go:276] 0 containers: []
	W0229 02:16:35.533592  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:16:35.533600  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:16:35.533669  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:16:35.581381  360776 cri.go:89] found id: ""
	I0229 02:16:35.581412  360776 logs.go:276] 0 containers: []
	W0229 02:16:35.581422  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:16:35.581429  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:16:35.581494  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:16:35.619128  360776 cri.go:89] found id: ""
	I0229 02:16:35.619158  360776 logs.go:276] 0 containers: []
	W0229 02:16:35.619169  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:16:35.619181  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:16:35.619197  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:16:35.655180  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:16:35.655216  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:16:35.701558  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:16:35.701585  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:16:35.753639  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:16:35.753672  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:16:35.769711  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:16:35.769743  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:16:35.843861  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:16:34.735895  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:36.736525  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:35.280004  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:37.778345  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:35.308212  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:37.807970  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:38.345063  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:16:38.361259  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:16:38.361345  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:16:38.405901  360776 cri.go:89] found id: ""
	I0229 02:16:38.405936  360776 logs.go:276] 0 containers: []
	W0229 02:16:38.405949  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:16:38.405958  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:16:38.406027  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:16:38.447860  360776 cri.go:89] found id: ""
	I0229 02:16:38.447894  360776 logs.go:276] 0 containers: []
	W0229 02:16:38.447907  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:16:38.447915  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:16:38.447983  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:16:38.489711  360776 cri.go:89] found id: ""
	I0229 02:16:38.489737  360776 logs.go:276] 0 containers: []
	W0229 02:16:38.489746  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:16:38.489752  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:16:38.489815  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:16:38.527094  360776 cri.go:89] found id: ""
	I0229 02:16:38.527120  360776 logs.go:276] 0 containers: []
	W0229 02:16:38.527128  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:16:38.527135  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:16:38.527202  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:16:38.564125  360776 cri.go:89] found id: ""
	I0229 02:16:38.564165  360776 logs.go:276] 0 containers: []
	W0229 02:16:38.564175  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:16:38.564183  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:16:38.564257  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:16:38.604355  360776 cri.go:89] found id: ""
	I0229 02:16:38.604385  360776 logs.go:276] 0 containers: []
	W0229 02:16:38.604394  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:16:38.604401  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:16:38.604471  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:16:38.642291  360776 cri.go:89] found id: ""
	I0229 02:16:38.642329  360776 logs.go:276] 0 containers: []
	W0229 02:16:38.642338  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:16:38.642345  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:16:38.642425  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:16:38.684559  360776 cri.go:89] found id: ""
	I0229 02:16:38.684605  360776 logs.go:276] 0 containers: []
	W0229 02:16:38.684617  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:16:38.684632  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:16:38.684646  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:16:38.735189  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:16:38.735230  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:16:38.750359  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:16:38.750388  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:16:38.832749  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:16:38.832777  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:16:38.832793  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:16:38.871321  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:16:38.871355  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:16:41.429960  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:16:41.445002  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:16:41.445081  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:16:41.487833  360776 cri.go:89] found id: ""
	I0229 02:16:41.487867  360776 logs.go:276] 0 containers: []
	W0229 02:16:41.487880  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:16:41.487889  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:16:41.487953  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:16:41.527667  360776 cri.go:89] found id: ""
	I0229 02:16:41.527691  360776 logs.go:276] 0 containers: []
	W0229 02:16:41.527700  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:16:41.527706  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:16:41.527767  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:16:41.568252  360776 cri.go:89] found id: ""
	I0229 02:16:41.568279  360776 logs.go:276] 0 containers: []
	W0229 02:16:41.568289  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:16:41.568295  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:16:41.568347  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:16:41.606664  360776 cri.go:89] found id: ""
	I0229 02:16:41.606697  360776 logs.go:276] 0 containers: []
	W0229 02:16:41.606709  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:16:41.606717  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:16:41.606787  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:16:41.643384  360776 cri.go:89] found id: ""
	I0229 02:16:41.643413  360776 logs.go:276] 0 containers: []
	W0229 02:16:41.643425  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:16:41.643433  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:16:41.643488  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:16:41.685132  360776 cri.go:89] found id: ""
	I0229 02:16:41.685165  360776 logs.go:276] 0 containers: []
	W0229 02:16:41.685179  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:16:41.685188  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:16:41.685255  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:16:41.725844  360776 cri.go:89] found id: ""
	I0229 02:16:41.725874  360776 logs.go:276] 0 containers: []
	W0229 02:16:41.725888  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:16:41.725901  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:16:41.725959  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:16:41.764651  360776 cri.go:89] found id: ""
	I0229 02:16:41.764684  360776 logs.go:276] 0 containers: []
	W0229 02:16:41.764710  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:16:41.764728  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:16:41.764745  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:16:41.846499  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:16:41.846520  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:16:41.846534  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:16:41.889415  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:16:41.889454  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:16:41.955514  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:16:41.955554  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:16:42.011187  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:16:42.011231  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:16:38.736997  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:40.737109  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:39.778387  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:41.780284  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:40.308479  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:42.807142  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:44.808770  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:44.528746  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:16:44.544657  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:16:44.544735  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:16:44.584593  360776 cri.go:89] found id: ""
	I0229 02:16:44.584619  360776 logs.go:276] 0 containers: []
	W0229 02:16:44.584628  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:16:44.584634  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:16:44.584703  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:16:44.621819  360776 cri.go:89] found id: ""
	I0229 02:16:44.621851  360776 logs.go:276] 0 containers: []
	W0229 02:16:44.621863  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:16:44.621870  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:16:44.621936  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:16:44.661908  360776 cri.go:89] found id: ""
	I0229 02:16:44.661939  360776 logs.go:276] 0 containers: []
	W0229 02:16:44.661951  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:16:44.661959  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:16:44.662042  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:16:44.703135  360776 cri.go:89] found id: ""
	I0229 02:16:44.703168  360776 logs.go:276] 0 containers: []
	W0229 02:16:44.703179  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:16:44.703186  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:16:44.703256  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:16:44.742783  360776 cri.go:89] found id: ""
	I0229 02:16:44.742812  360776 logs.go:276] 0 containers: []
	W0229 02:16:44.742823  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:16:44.742831  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:16:44.742900  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:16:44.786223  360776 cri.go:89] found id: ""
	I0229 02:16:44.786258  360776 logs.go:276] 0 containers: []
	W0229 02:16:44.786271  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:16:44.786280  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:16:44.786348  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:16:44.832269  360776 cri.go:89] found id: ""
	I0229 02:16:44.832295  360776 logs.go:276] 0 containers: []
	W0229 02:16:44.832304  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:16:44.832312  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:16:44.832371  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:16:44.882497  360776 cri.go:89] found id: ""
	I0229 02:16:44.882529  360776 logs.go:276] 0 containers: []
	W0229 02:16:44.882541  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:16:44.882554  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:16:44.882572  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:16:44.898452  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:16:44.898484  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:16:44.988062  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
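
The repeated "connection to the server localhost:8443 was refused" stderr above means the kube-apiserver is not listening yet, so every describe-nodes attempt fails before any node data can be gathered. A minimal sketch of probing the same endpoint by hand on the node, assuming the kubeconfig and binary paths shown in the log:

    # Ask the apiserver for its health directly; fails while the port is closed.
    curl -sk https://localhost:8443/healthz || echo "apiserver not up yet"
    # Same check through the bundled kubectl, using the kubeconfig minikube provisions.
    sudo /var/lib/minikube/binaries/v1.16.0/kubectl get nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig
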
	I0229 02:16:44.988089  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:16:44.988106  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:16:45.025317  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:16:45.025353  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:16:45.069804  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:16:45.069843  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
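
The "container status" step above shells out with a fallback chain: prefer a crictl found on $PATH, and fall back to `docker ps -a` if the crictl invocation fails. A sketch of the same pattern, spelled out:

    # Resolve crictl if installed; `which` prints nothing on a miss, so the echo keeps a usable token.
    CRICTL=$(which crictl || echo crictl)
    # List all CRI containers; if that fails (no crictl, no CRI socket), try the Docker CLI instead.
    sudo "$CRICTL" ps -a || sudo docker ps -a
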
	I0229 02:16:43.236422  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:45.236874  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:47.238514  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:44.277544  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:46.279502  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:48.280224  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:46.809509  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:49.307555  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
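
The interleaved pod_ready lines come from parallel test processes (PIDs 360079, 360217, 361093) each polling its metrics-server pod every couple of seconds until the Ready condition flips to True. The equivalent one-shot check with kubectl (pod name taken from the log; the jsonpath filter is standard kubectl syntax):

    # Print the Ready condition of the pod; "True" means the readiness probe passes.
    kubectl -n kube-system get pod metrics-server-57f55c9bc5-9sdkl \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
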
	I0229 02:16:47.621890  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:16:47.636506  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:16:47.636572  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:16:47.679975  360776 cri.go:89] found id: ""
	I0229 02:16:47.680007  360776 logs.go:276] 0 containers: []
	W0229 02:16:47.680019  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:16:47.680026  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:16:47.680099  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:16:47.720573  360776 cri.go:89] found id: ""
	I0229 02:16:47.720604  360776 logs.go:276] 0 containers: []
	W0229 02:16:47.720616  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:16:47.720628  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:16:47.720693  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:16:47.762211  360776 cri.go:89] found id: ""
	I0229 02:16:47.762239  360776 logs.go:276] 0 containers: []
	W0229 02:16:47.762256  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:16:47.762264  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:16:47.762325  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:16:47.801703  360776 cri.go:89] found id: ""
	I0229 02:16:47.801726  360776 logs.go:276] 0 containers: []
	W0229 02:16:47.801736  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:16:47.801745  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:16:47.801804  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:16:47.843036  360776 cri.go:89] found id: ""
	I0229 02:16:47.843065  360776 logs.go:276] 0 containers: []
	W0229 02:16:47.843074  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:16:47.843087  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:16:47.843137  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:16:47.901986  360776 cri.go:89] found id: ""
	I0229 02:16:47.902016  360776 logs.go:276] 0 containers: []
	W0229 02:16:47.902029  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:16:47.902037  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:16:47.902115  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:16:47.970578  360776 cri.go:89] found id: ""
	I0229 02:16:47.970626  360776 logs.go:276] 0 containers: []
	W0229 02:16:47.970638  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:16:47.970646  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:16:47.970727  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:16:48.008245  360776 cri.go:89] found id: ""
	I0229 02:16:48.008280  360776 logs.go:276] 0 containers: []
	W0229 02:16:48.008290  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:16:48.008303  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:16:48.008318  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:16:48.059243  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:16:48.059277  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:16:48.109287  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:16:48.109328  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:16:48.124720  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:16:48.124747  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:16:48.201686  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:16:48.201734  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:16:48.201750  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:16:50.740237  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:16:50.755100  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:16:50.755174  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:16:50.799284  360776 cri.go:89] found id: ""
	I0229 02:16:50.799304  360776 logs.go:276] 0 containers: []
	W0229 02:16:50.799312  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:16:50.799318  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:16:50.799367  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:16:50.863582  360776 cri.go:89] found id: ""
	I0229 02:16:50.863617  360776 logs.go:276] 0 containers: []
	W0229 02:16:50.863630  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:16:50.863638  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:16:50.863709  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:16:50.913067  360776 cri.go:89] found id: ""
	I0229 02:16:50.913097  360776 logs.go:276] 0 containers: []
	W0229 02:16:50.913107  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:16:50.913114  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:16:50.913181  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:16:50.964343  360776 cri.go:89] found id: ""
	I0229 02:16:50.964372  360776 logs.go:276] 0 containers: []
	W0229 02:16:50.964381  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:16:50.964387  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:16:50.964443  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:16:51.008180  360776 cri.go:89] found id: ""
	I0229 02:16:51.008215  360776 logs.go:276] 0 containers: []
	W0229 02:16:51.008226  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:16:51.008234  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:16:51.008314  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:16:51.050574  360776 cri.go:89] found id: ""
	I0229 02:16:51.050604  360776 logs.go:276] 0 containers: []
	W0229 02:16:51.050613  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:16:51.050619  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:16:51.050682  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:16:51.094144  360776 cri.go:89] found id: ""
	I0229 02:16:51.094170  360776 logs.go:276] 0 containers: []
	W0229 02:16:51.094180  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:16:51.094187  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:16:51.094254  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:16:51.133928  360776 cri.go:89] found id: ""
	I0229 02:16:51.133963  360776 logs.go:276] 0 containers: []
	W0229 02:16:51.133976  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:16:51.133989  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:16:51.134005  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:16:51.169857  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:16:51.169888  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:16:51.211739  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:16:51.211774  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:16:51.267237  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:16:51.267277  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:16:51.285167  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:16:51.285200  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:16:51.361051  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
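
Each cycle above follows the same shape: pgrep for a running kube-apiserver process, list every control-plane component through crictl (all return zero containers), gather kubelet/dmesg/describe-nodes/containerd/container-status logs, then retry a few seconds later. The two probes that gate the loop, as run on the node:

    # Is a kube-apiserver process for this minikube profile running at all?
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'
    # Did containerd ever create a kube-apiserver container (running or exited)?
    sudo crictl ps -a --quiet --name=kube-apiserver
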
	I0229 02:16:49.736852  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:52.235969  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:50.781150  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:53.277926  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:51.307606  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:53.308568  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:53.861859  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:16:53.879047  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:16:53.879124  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:16:53.931722  360776 cri.go:89] found id: ""
	I0229 02:16:53.931751  360776 logs.go:276] 0 containers: []
	W0229 02:16:53.931761  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:16:53.931770  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:16:53.931843  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:16:53.989223  360776 cri.go:89] found id: ""
	I0229 02:16:53.989250  360776 logs.go:276] 0 containers: []
	W0229 02:16:53.989259  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:16:53.989266  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:16:53.989316  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:16:54.029340  360776 cri.go:89] found id: ""
	I0229 02:16:54.029367  360776 logs.go:276] 0 containers: []
	W0229 02:16:54.029379  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:16:54.029394  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:16:54.029455  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:16:54.065032  360776 cri.go:89] found id: ""
	I0229 02:16:54.065061  360776 logs.go:276] 0 containers: []
	W0229 02:16:54.065072  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:16:54.065081  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:16:54.065148  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:16:54.103739  360776 cri.go:89] found id: ""
	I0229 02:16:54.103771  360776 logs.go:276] 0 containers: []
	W0229 02:16:54.103783  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:16:54.103791  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:16:54.103886  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:16:54.146653  360776 cri.go:89] found id: ""
	I0229 02:16:54.146706  360776 logs.go:276] 0 containers: []
	W0229 02:16:54.146720  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:16:54.146728  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:16:54.146804  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:16:54.183885  360776 cri.go:89] found id: ""
	I0229 02:16:54.183909  360776 logs.go:276] 0 containers: []
	W0229 02:16:54.183917  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:16:54.183923  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:16:54.183985  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:16:54.223712  360776 cri.go:89] found id: ""
	I0229 02:16:54.223739  360776 logs.go:276] 0 containers: []
	W0229 02:16:54.223748  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:16:54.223758  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:16:54.223776  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:16:54.239418  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:16:54.239443  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:16:54.316236  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:16:54.316262  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:16:54.316278  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:16:54.351899  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:16:54.351933  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:16:54.396954  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:16:54.396990  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
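
The gather steps pull from three sources: the systemd journals for the kubelet and containerd units, and the kernel ring buffer via dmesg filtered to warning-and-worse levels. Roughly what each logged command does (-P/-H/-L are the util-linux dmesg options for no pager, human-readable output, and color):

    # Last 400 journal lines for each unit.
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u containerd -n 400
    # Kernel messages at level warn or higher, uncolored, no pager, last 400 lines.
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
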
	I0229 02:16:56.949058  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:16:56.965888  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:16:56.965966  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:16:57.010067  360776 cri.go:89] found id: ""
	I0229 02:16:57.010114  360776 logs.go:276] 0 containers: []
	W0229 02:16:57.010127  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:16:57.010136  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:16:57.010199  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:16:57.048082  360776 cri.go:89] found id: ""
	I0229 02:16:57.048108  360776 logs.go:276] 0 containers: []
	W0229 02:16:57.048116  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:16:57.048123  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:16:57.048172  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:16:57.082859  360776 cri.go:89] found id: ""
	I0229 02:16:57.082890  360776 logs.go:276] 0 containers: []
	W0229 02:16:57.082903  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:16:57.082910  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:16:57.082971  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:16:57.118291  360776 cri.go:89] found id: ""
	I0229 02:16:57.118321  360776 logs.go:276] 0 containers: []
	W0229 02:16:57.118331  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:16:57.118338  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:16:57.118396  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:16:57.155920  360776 cri.go:89] found id: ""
	I0229 02:16:57.155945  360776 logs.go:276] 0 containers: []
	W0229 02:16:57.155954  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:16:57.155960  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:16:57.156007  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:16:57.198460  360776 cri.go:89] found id: ""
	I0229 02:16:57.198494  360776 logs.go:276] 0 containers: []
	W0229 02:16:57.198503  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:16:57.198515  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:16:57.198576  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:16:57.239178  360776 cri.go:89] found id: ""
	I0229 02:16:57.239206  360776 logs.go:276] 0 containers: []
	W0229 02:16:57.239214  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:16:57.239220  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:16:57.239267  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:16:57.280933  360776 cri.go:89] found id: ""
	I0229 02:16:57.280964  360776 logs.go:276] 0 containers: []
	W0229 02:16:57.280977  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:16:57.280988  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:16:57.281004  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:16:57.341023  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:16:57.341056  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:16:54.237542  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:56.736019  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:55.778328  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:58.281018  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:55.309863  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:57.311910  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:59.807723  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:57.356053  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:16:57.356083  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:16:57.435017  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:16:57.435040  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:16:57.435057  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:16:57.472428  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:16:57.472461  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:17:00.020707  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:17:00.035406  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:17:00.035476  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:17:00.072190  360776 cri.go:89] found id: ""
	I0229 02:17:00.072222  360776 logs.go:276] 0 containers: []
	W0229 02:17:00.072231  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:17:00.072237  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:17:00.072289  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:17:00.108829  360776 cri.go:89] found id: ""
	I0229 02:17:00.108857  360776 logs.go:276] 0 containers: []
	W0229 02:17:00.108868  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:17:00.108875  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:17:00.108927  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:17:00.143429  360776 cri.go:89] found id: ""
	I0229 02:17:00.143450  360776 logs.go:276] 0 containers: []
	W0229 02:17:00.143459  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:17:00.143465  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:17:00.143512  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:17:00.180428  360776 cri.go:89] found id: ""
	I0229 02:17:00.180456  360776 logs.go:276] 0 containers: []
	W0229 02:17:00.180467  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:17:00.180496  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:17:00.180564  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:17:00.220115  360776 cri.go:89] found id: ""
	I0229 02:17:00.220143  360776 logs.go:276] 0 containers: []
	W0229 02:17:00.220155  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:17:00.220163  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:17:00.220220  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:17:00.258851  360776 cri.go:89] found id: ""
	I0229 02:17:00.258877  360776 logs.go:276] 0 containers: []
	W0229 02:17:00.258887  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:17:00.258895  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:17:00.258982  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:17:00.304148  360776 cri.go:89] found id: ""
	I0229 02:17:00.304174  360776 logs.go:276] 0 containers: []
	W0229 02:17:00.304185  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:17:00.304193  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:17:00.304277  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:17:00.345893  360776 cri.go:89] found id: ""
	I0229 02:17:00.345923  360776 logs.go:276] 0 containers: []
	W0229 02:17:00.345935  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:17:00.345950  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:17:00.345965  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:17:00.395977  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:17:00.396006  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:17:00.410948  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:17:00.410970  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:17:00.485724  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:17:00.485745  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:17:00.485760  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:17:00.520496  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:17:00.520531  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:16:59.236302  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:17:01.237806  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:17:00.777736  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:17:03.280794  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:17:01.807808  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:17:03.818535  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:17:03.065669  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:17:03.081434  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:17:03.081496  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:17:03.118752  360776 cri.go:89] found id: ""
	I0229 02:17:03.118779  360776 logs.go:276] 0 containers: []
	W0229 02:17:03.118788  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:17:03.118794  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:17:03.118870  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:17:03.156172  360776 cri.go:89] found id: ""
	I0229 02:17:03.156197  360776 logs.go:276] 0 containers: []
	W0229 02:17:03.156209  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:17:03.156216  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:17:03.156285  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:17:03.190792  360776 cri.go:89] found id: ""
	I0229 02:17:03.190815  360776 logs.go:276] 0 containers: []
	W0229 02:17:03.190823  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:17:03.190829  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:17:03.190885  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:17:03.229692  360776 cri.go:89] found id: ""
	I0229 02:17:03.229721  360776 logs.go:276] 0 containers: []
	W0229 02:17:03.229733  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:17:03.229741  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:17:03.229800  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:17:03.271014  360776 cri.go:89] found id: ""
	I0229 02:17:03.271044  360776 logs.go:276] 0 containers: []
	W0229 02:17:03.271053  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:17:03.271058  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:17:03.271118  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:17:03.315291  360776 cri.go:89] found id: ""
	I0229 02:17:03.315316  360776 logs.go:276] 0 containers: []
	W0229 02:17:03.315325  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:17:03.315332  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:17:03.315390  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:17:03.354974  360776 cri.go:89] found id: ""
	I0229 02:17:03.354998  360776 logs.go:276] 0 containers: []
	W0229 02:17:03.355007  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:17:03.355014  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:17:03.355091  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:17:03.394044  360776 cri.go:89] found id: ""
	I0229 02:17:03.394074  360776 logs.go:276] 0 containers: []
	W0229 02:17:03.394101  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:17:03.394120  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:17:03.394138  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:17:03.430131  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:17:03.430164  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:17:03.472760  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:17:03.472793  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:17:03.522797  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:17:03.522837  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:17:03.538642  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:17:03.538672  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:17:03.611189  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:17:06.112319  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:17:06.126843  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:17:06.126924  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:17:06.171970  360776 cri.go:89] found id: ""
	I0229 02:17:06.171995  360776 logs.go:276] 0 containers: []
	W0229 02:17:06.172005  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:17:06.172011  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:17:06.172060  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:17:06.208082  360776 cri.go:89] found id: ""
	I0229 02:17:06.208114  360776 logs.go:276] 0 containers: []
	W0229 02:17:06.208126  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:17:06.208133  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:17:06.208211  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:17:06.246429  360776 cri.go:89] found id: ""
	I0229 02:17:06.246454  360776 logs.go:276] 0 containers: []
	W0229 02:17:06.246465  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:17:06.246472  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:17:06.246521  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:17:06.286908  360776 cri.go:89] found id: ""
	I0229 02:17:06.286941  360776 logs.go:276] 0 containers: []
	W0229 02:17:06.286952  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:17:06.286959  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:17:06.287036  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:17:06.330632  360776 cri.go:89] found id: ""
	I0229 02:17:06.330664  360776 logs.go:276] 0 containers: []
	W0229 02:17:06.330707  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:17:06.330720  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:17:06.330793  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:17:06.368385  360776 cri.go:89] found id: ""
	I0229 02:17:06.368412  360776 logs.go:276] 0 containers: []
	W0229 02:17:06.368423  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:17:06.368431  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:17:06.368499  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:17:06.407424  360776 cri.go:89] found id: ""
	I0229 02:17:06.407456  360776 logs.go:276] 0 containers: []
	W0229 02:17:06.407468  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:17:06.407476  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:17:06.407542  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:17:06.447043  360776 cri.go:89] found id: ""
	I0229 02:17:06.447072  360776 logs.go:276] 0 containers: []
	W0229 02:17:06.447084  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:17:06.447098  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:17:06.447119  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:17:06.501604  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:17:06.501639  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:17:06.516247  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:17:06.516274  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:17:06.593087  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:17:06.593112  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:17:06.593126  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:17:06.633057  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:17:06.633097  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:17:03.735552  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:17:05.735757  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:17:07.736746  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:17:05.777670  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:17:07.779116  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:17:06.308986  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:17:08.808349  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:17:09.202624  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:17:09.218424  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:17:09.218496  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:17:09.264508  360776 cri.go:89] found id: ""
	I0229 02:17:09.264538  360776 logs.go:276] 0 containers: []
	W0229 02:17:09.264551  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:17:09.264560  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:17:09.264652  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:17:09.304507  360776 cri.go:89] found id: ""
	I0229 02:17:09.304536  360776 logs.go:276] 0 containers: []
	W0229 02:17:09.304547  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:17:09.304555  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:17:09.304619  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:17:09.354779  360776 cri.go:89] found id: ""
	I0229 02:17:09.354802  360776 logs.go:276] 0 containers: []
	W0229 02:17:09.354811  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:17:09.354817  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:17:09.354866  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:17:09.390031  360776 cri.go:89] found id: ""
	I0229 02:17:09.390065  360776 logs.go:276] 0 containers: []
	W0229 02:17:09.390097  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:17:09.390106  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:17:09.390182  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:17:09.435618  360776 cri.go:89] found id: ""
	I0229 02:17:09.435652  360776 logs.go:276] 0 containers: []
	W0229 02:17:09.435666  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:17:09.435674  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:17:09.435757  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:17:09.479110  360776 cri.go:89] found id: ""
	I0229 02:17:09.479142  360776 logs.go:276] 0 containers: []
	W0229 02:17:09.479154  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:17:09.479163  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:17:09.479236  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:17:09.520748  360776 cri.go:89] found id: ""
	I0229 02:17:09.520781  360776 logs.go:276] 0 containers: []
	W0229 02:17:09.520794  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:17:09.520802  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:17:09.520879  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:17:09.561536  360776 cri.go:89] found id: ""
	I0229 02:17:09.561576  360776 logs.go:276] 0 containers: []
	W0229 02:17:09.561590  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:17:09.561611  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:17:09.561628  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:17:09.621631  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:17:09.621678  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:17:09.640562  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:17:09.640607  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:17:09.727979  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:17:09.728001  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:17:09.728013  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:17:09.766305  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:17:09.766340  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:17:12.312841  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:17:12.329745  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:17:12.329826  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:17:10.236840  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:17:12.736224  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:17:09.779304  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:17:12.277545  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:17:11.308061  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:17:13.808929  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:17:12.376185  360776 cri.go:89] found id: ""
	I0229 02:17:12.376218  360776 logs.go:276] 0 containers: []
	W0229 02:17:12.376230  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:17:12.376240  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:17:12.376317  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:17:12.417025  360776 cri.go:89] found id: ""
	I0229 02:17:12.417059  360776 logs.go:276] 0 containers: []
	W0229 02:17:12.417068  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:17:12.417080  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:17:12.417162  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:17:12.458973  360776 cri.go:89] found id: ""
	I0229 02:17:12.459018  360776 logs.go:276] 0 containers: []
	W0229 02:17:12.459040  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:17:12.459048  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:17:12.459116  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:17:12.500063  360776 cri.go:89] found id: ""
	I0229 02:17:12.500090  360776 logs.go:276] 0 containers: []
	W0229 02:17:12.500102  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:17:12.500110  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:17:12.500177  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:17:12.543182  360776 cri.go:89] found id: ""
	I0229 02:17:12.543213  360776 logs.go:276] 0 containers: []
	W0229 02:17:12.543225  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:17:12.543234  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:17:12.543296  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:17:12.584725  360776 cri.go:89] found id: ""
	I0229 02:17:12.584773  360776 logs.go:276] 0 containers: []
	W0229 02:17:12.584796  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:17:12.584804  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:17:12.584873  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:17:12.634212  360776 cri.go:89] found id: ""
	I0229 02:17:12.634244  360776 logs.go:276] 0 containers: []
	W0229 02:17:12.634256  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:17:12.634263  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:17:12.634330  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:17:12.686103  360776 cri.go:89] found id: ""
	I0229 02:17:12.686134  360776 logs.go:276] 0 containers: []
	W0229 02:17:12.686144  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:17:12.686154  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:17:12.686168  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:17:12.753950  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:17:12.753999  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:17:12.769400  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:17:12.769430  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:17:12.856362  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:17:12.856390  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:17:12.856408  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:17:12.893238  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:17:12.893274  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:17:15.439069  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:17:15.455698  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:17:15.455779  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:17:15.501222  360776 cri.go:89] found id: ""
	I0229 02:17:15.501248  360776 logs.go:276] 0 containers: []
	W0229 02:17:15.501262  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:17:15.501269  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:17:15.501331  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:17:15.544580  360776 cri.go:89] found id: ""
	I0229 02:17:15.544610  360776 logs.go:276] 0 containers: []
	W0229 02:17:15.544623  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:17:15.544632  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:17:15.544697  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:17:15.587250  360776 cri.go:89] found id: ""
	I0229 02:17:15.587301  360776 logs.go:276] 0 containers: []
	W0229 02:17:15.587314  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:17:15.587322  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:17:15.587392  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:17:15.660189  360776 cri.go:89] found id: ""
	I0229 02:17:15.660214  360776 logs.go:276] 0 containers: []
	W0229 02:17:15.660223  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:17:15.660229  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:17:15.660280  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:17:15.715100  360776 cri.go:89] found id: ""
	I0229 02:17:15.715126  360776 logs.go:276] 0 containers: []
	W0229 02:17:15.715136  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:17:15.715142  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:17:15.715203  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:17:15.758998  360776 cri.go:89] found id: ""
	I0229 02:17:15.759028  360776 logs.go:276] 0 containers: []
	W0229 02:17:15.759047  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:17:15.759053  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:17:15.759118  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:17:15.801175  360776 cri.go:89] found id: ""
	I0229 02:17:15.801203  360776 logs.go:276] 0 containers: []
	W0229 02:17:15.801215  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:17:15.801224  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:17:15.801294  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:17:15.849643  360776 cri.go:89] found id: ""
	I0229 02:17:15.849678  360776 logs.go:276] 0 containers: []
	W0229 02:17:15.849690  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:17:15.849704  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:17:15.849724  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:17:15.864824  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:17:15.864856  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:17:15.937271  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:17:15.937299  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:17:15.937313  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:17:15.976404  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:17:15.976448  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:17:16.025658  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:17:16.025697  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
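	(Annotation: the cycle above repeats roughly every three seconds: pgrep for kube-apiserver, then one `sudo crictl ps -a --quiet --name=<component>` per control-plane component; empty output from crictl is what produces each "No container was found matching" warning. The sketch below is an assumed, simplified stand-in for this loop — it is not minikube's actual cri.go — using only the invocations recorded in the log.)

	// poll_components.go - illustrative sketch (assumed helper, not minikube's
	// cri.go): lists CRI containers per component the way the log cycle above
	// does, warning when crictl returns no container IDs.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		components := []string{
			"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
		}
		for _, name := range components {
			// Same invocation the log records: sudo crictl ps -a --quiet --name=<component>
			out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
			ids := strings.Fields(string(out))
			if err != nil || len(ids) == 0 {
				fmt.Printf("No container was found matching %q\n", name)
				continue
			}
			fmt.Printf("%s: %d container(s)\n", name, len(ids))
		}
	}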
	I0229 02:17:15.235863  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:17:17.237685  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:17:14.279268  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:17:16.280226  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:17:18.779746  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:17:16.307548  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:17:18.806653  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:17:18.574763  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:17:18.593695  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:17:18.593802  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:17:18.641001  360776 cri.go:89] found id: ""
	I0229 02:17:18.641033  360776 logs.go:276] 0 containers: []
	W0229 02:17:18.641042  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:17:18.641048  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:17:18.641106  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:17:18.701580  360776 cri.go:89] found id: ""
	I0229 02:17:18.701608  360776 logs.go:276] 0 containers: []
	W0229 02:17:18.701617  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:17:18.701623  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:17:18.701674  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:17:18.742596  360776 cri.go:89] found id: ""
	I0229 02:17:18.742632  360776 logs.go:276] 0 containers: []
	W0229 02:17:18.742642  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:17:18.742649  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:17:18.742712  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:17:18.782404  360776 cri.go:89] found id: ""
	I0229 02:17:18.782432  360776 logs.go:276] 0 containers: []
	W0229 02:17:18.782443  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:17:18.782451  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:17:18.782516  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:17:18.826221  360776 cri.go:89] found id: ""
	I0229 02:17:18.826250  360776 logs.go:276] 0 containers: []
	W0229 02:17:18.826262  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:17:18.826270  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:17:18.826354  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:17:18.864698  360776 cri.go:89] found id: ""
	I0229 02:17:18.864737  360776 logs.go:276] 0 containers: []
	W0229 02:17:18.864746  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:17:18.864766  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:17:18.864819  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:17:18.902681  360776 cri.go:89] found id: ""
	I0229 02:17:18.902708  360776 logs.go:276] 0 containers: []
	W0229 02:17:18.902718  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:17:18.902723  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:17:18.902835  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:17:18.942178  360776 cri.go:89] found id: ""
	I0229 02:17:18.942203  360776 logs.go:276] 0 containers: []
	W0229 02:17:18.942213  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:17:18.942223  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:17:18.942236  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:17:18.983914  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:17:18.983947  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:17:19.041670  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:17:19.041710  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:17:19.057445  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:17:19.057475  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:17:19.128946  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:17:19.128974  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:17:19.129007  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:17:21.664806  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:17:21.680938  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:17:21.681037  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:17:21.737776  360776 cri.go:89] found id: ""
	I0229 02:17:21.737808  360776 logs.go:276] 0 containers: []
	W0229 02:17:21.737825  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:17:21.737833  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:17:21.737913  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:17:21.778917  360776 cri.go:89] found id: ""
	I0229 02:17:21.778951  360776 logs.go:276] 0 containers: []
	W0229 02:17:21.778962  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:17:21.778969  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:17:21.779033  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:17:21.819099  360776 cri.go:89] found id: ""
	I0229 02:17:21.819127  360776 logs.go:276] 0 containers: []
	W0229 02:17:21.819139  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:17:21.819147  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:17:21.819230  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:17:21.861290  360776 cri.go:89] found id: ""
	I0229 02:17:21.861323  360776 logs.go:276] 0 containers: []
	W0229 02:17:21.861334  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:17:21.861342  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:17:21.861406  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:17:21.900886  360776 cri.go:89] found id: ""
	I0229 02:17:21.900926  360776 logs.go:276] 0 containers: []
	W0229 02:17:21.900938  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:17:21.900946  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:17:21.901021  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:17:21.943023  360776 cri.go:89] found id: ""
	I0229 02:17:21.943060  360776 logs.go:276] 0 containers: []
	W0229 02:17:21.943072  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:17:21.943080  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:17:21.943145  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:17:21.984305  360776 cri.go:89] found id: ""
	I0229 02:17:21.984341  360776 logs.go:276] 0 containers: []
	W0229 02:17:21.984352  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:17:21.984360  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:17:21.984428  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:17:22.025326  360776 cri.go:89] found id: ""
	I0229 02:17:22.025356  360776 logs.go:276] 0 containers: []
	W0229 02:17:22.025368  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:17:22.025382  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:17:22.025398  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:17:22.074977  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:17:22.075020  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:17:22.092483  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:17:22.092518  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:17:22.171791  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:17:22.171814  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:17:22.171833  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:17:22.211794  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:17:22.211850  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:17:19.736684  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:17:21.737510  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:17:21.278089  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:17:23.278374  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:17:20.808574  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:17:23.307697  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:17:24.758800  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:17:24.773418  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:17:24.773501  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:17:24.819487  360776 cri.go:89] found id: ""
	I0229 02:17:24.819520  360776 logs.go:276] 0 containers: []
	W0229 02:17:24.819531  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:17:24.819540  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:17:24.819605  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:17:24.859906  360776 cri.go:89] found id: ""
	I0229 02:17:24.859938  360776 logs.go:276] 0 containers: []
	W0229 02:17:24.859949  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:17:24.859957  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:17:24.860022  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:17:24.897499  360776 cri.go:89] found id: ""
	I0229 02:17:24.897531  360776 logs.go:276] 0 containers: []
	W0229 02:17:24.897540  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:17:24.897547  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:17:24.897622  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:17:24.935346  360776 cri.go:89] found id: ""
	I0229 02:17:24.935380  360776 logs.go:276] 0 containers: []
	W0229 02:17:24.935393  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:17:24.935401  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:17:24.935468  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:17:24.973567  360776 cri.go:89] found id: ""
	I0229 02:17:24.973591  360776 logs.go:276] 0 containers: []
	W0229 02:17:24.973600  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:17:24.973605  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:17:24.973657  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:17:25.016166  360776 cri.go:89] found id: ""
	I0229 02:17:25.016198  360776 logs.go:276] 0 containers: []
	W0229 02:17:25.016210  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:17:25.016217  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:17:25.016285  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:17:25.059944  360776 cri.go:89] found id: ""
	I0229 02:17:25.059977  360776 logs.go:276] 0 containers: []
	W0229 02:17:25.059991  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:17:25.059999  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:17:25.060057  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:17:25.101594  360776 cri.go:89] found id: ""
	I0229 02:17:25.101627  360776 logs.go:276] 0 containers: []
	W0229 02:17:25.101639  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:17:25.101652  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:17:25.101672  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:17:25.183940  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:17:25.183988  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:17:25.184007  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:17:25.219286  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:17:25.219327  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:17:25.267048  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:17:25.267107  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:17:25.320969  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:17:25.320998  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:17:24.236957  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:17:26.736244  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:17:25.278532  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:17:27.777655  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:17:25.308061  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:17:27.806994  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:17:27.846314  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:17:27.861349  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:17:27.861416  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:17:27.901126  360776 cri.go:89] found id: ""
	I0229 02:17:27.901153  360776 logs.go:276] 0 containers: []
	W0229 02:17:27.901162  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:17:27.901169  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:17:27.901220  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:17:27.942692  360776 cri.go:89] found id: ""
	I0229 02:17:27.942725  360776 logs.go:276] 0 containers: []
	W0229 02:17:27.942738  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:17:27.942745  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:17:27.942803  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:17:27.978891  360776 cri.go:89] found id: ""
	I0229 02:17:27.978919  360776 logs.go:276] 0 containers: []
	W0229 02:17:27.978928  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:17:27.978934  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:17:27.978991  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:17:28.019688  360776 cri.go:89] found id: ""
	I0229 02:17:28.019723  360776 logs.go:276] 0 containers: []
	W0229 02:17:28.019735  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:17:28.019743  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:17:28.019799  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:17:28.056414  360776 cri.go:89] found id: ""
	I0229 02:17:28.056438  360776 logs.go:276] 0 containers: []
	W0229 02:17:28.056451  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:17:28.056457  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:17:28.056504  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:17:28.093691  360776 cri.go:89] found id: ""
	I0229 02:17:28.093727  360776 logs.go:276] 0 containers: []
	W0229 02:17:28.093739  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:17:28.093747  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:17:28.093806  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:17:28.130737  360776 cri.go:89] found id: ""
	I0229 02:17:28.130761  360776 logs.go:276] 0 containers: []
	W0229 02:17:28.130768  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:17:28.130774  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:17:28.130828  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:17:28.167783  360776 cri.go:89] found id: ""
	I0229 02:17:28.167810  360776 logs.go:276] 0 containers: []
	W0229 02:17:28.167820  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:17:28.167832  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:17:28.167850  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:17:28.248054  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:17:28.248080  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:17:28.248096  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:17:28.284935  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:17:28.284963  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:17:28.328563  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:17:28.328605  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:17:28.379372  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:17:28.379412  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:17:30.896570  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:17:30.912070  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:17:30.912140  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:17:30.951633  360776 cri.go:89] found id: ""
	I0229 02:17:30.951662  360776 logs.go:276] 0 containers: []
	W0229 02:17:30.951674  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:17:30.951681  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:17:30.951725  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:17:30.988094  360776 cri.go:89] found id: ""
	I0229 02:17:30.988121  360776 logs.go:276] 0 containers: []
	W0229 02:17:30.988133  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:17:30.988141  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:17:30.988197  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:17:31.025379  360776 cri.go:89] found id: ""
	I0229 02:17:31.025405  360776 logs.go:276] 0 containers: []
	W0229 02:17:31.025416  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:17:31.025423  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:17:31.025476  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:17:31.064070  360776 cri.go:89] found id: ""
	I0229 02:17:31.064100  360776 logs.go:276] 0 containers: []
	W0229 02:17:31.064112  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:17:31.064120  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:17:31.064178  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:17:31.106455  360776 cri.go:89] found id: ""
	I0229 02:17:31.106487  360776 logs.go:276] 0 containers: []
	W0229 02:17:31.106498  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:17:31.106505  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:17:31.106564  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:17:31.141789  360776 cri.go:89] found id: ""
	I0229 02:17:31.141819  360776 logs.go:276] 0 containers: []
	W0229 02:17:31.141830  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:17:31.141838  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:17:31.141985  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:17:31.181781  360776 cri.go:89] found id: ""
	I0229 02:17:31.181807  360776 logs.go:276] 0 containers: []
	W0229 02:17:31.181815  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:17:31.181820  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:17:31.181877  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:17:31.222653  360776 cri.go:89] found id: ""
	I0229 02:17:31.222687  360776 logs.go:276] 0 containers: []
	W0229 02:17:31.222700  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:17:31.222713  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:17:31.222730  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:17:31.272067  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:17:31.272100  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:17:31.287890  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:17:31.287917  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:17:31.370516  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:17:31.370545  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:17:31.370559  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:17:31.416216  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:17:31.416257  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:17:29.235795  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:17:31.237540  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:17:31.729967  360079 pod_ready.go:81] duration metric: took 4m0.001042569s waiting for pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace to be "Ready" ...
	E0229 02:17:31.729999  360079 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace to be "Ready" (will not retry!)
	I0229 02:17:31.730022  360079 pod_ready.go:38] duration metric: took 4m13.043743347s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 02:17:31.730062  360079 kubeadm.go:640] restartCluster took 4m31.356459787s
	W0229 02:17:31.730347  360079 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0229 02:17:31.730404  360079 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
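	(Annotation: process 360079 above spent its full 4m0s budget polling the metrics-server pod's Ready condition at ~2s intervals, gave up, and fell back to `kubeadm reset`. A minimal client-go sketch of that kind of wait loop follows — an assumed standalone program, not minikube's pod_ready.go; the kubeconfig path and pod name are copied from the log for illustration.)

	// wait_ready.go - minimal client-go sketch (assumed standalone, not
	// minikube's pod_ready.go): polls a pod's Ready condition with a hard
	// timeout, mirroring the 4m0s wait that times out in the log above.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Assumption: kubeconfig at the path the log references; adjust as needed.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)

		deadline := time.Now().Add(4 * time.Minute) // same budget as the log's wait
		for time.Now().Before(deadline) {
			pod, err := client.CoreV1().Pods("kube-system").Get(
				context.TODO(), "metrics-server-57f55c9bc5-5lfgm", metav1.GetOptions{})
			if err == nil {
				for _, c := range pod.Status.Conditions {
					if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
						fmt.Println("pod is Ready")
						return
					}
				}
			}
			time.Sleep(2 * time.Second) // the log shows ~2s polling intervals
		}
		fmt.Println("timed out waiting for pod to be Ready") // triggers the reset path above
	}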
	I0229 02:17:29.777918  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:17:31.778158  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:17:30.307297  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:17:32.307846  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:17:34.309842  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:17:33.976724  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:17:33.991119  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:17:33.991202  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:17:34.038632  360776 cri.go:89] found id: ""
	I0229 02:17:34.038659  360776 logs.go:276] 0 containers: []
	W0229 02:17:34.038668  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:17:34.038674  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:17:34.038744  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:17:34.076069  360776 cri.go:89] found id: ""
	I0229 02:17:34.076109  360776 logs.go:276] 0 containers: []
	W0229 02:17:34.076120  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:17:34.076128  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:17:34.076212  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:17:34.122220  360776 cri.go:89] found id: ""
	I0229 02:17:34.122246  360776 logs.go:276] 0 containers: []
	W0229 02:17:34.122256  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:17:34.122265  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:17:34.122329  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:17:34.163216  360776 cri.go:89] found id: ""
	I0229 02:17:34.163246  360776 logs.go:276] 0 containers: []
	W0229 02:17:34.163259  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:17:34.163268  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:17:34.163337  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:17:34.206631  360776 cri.go:89] found id: ""
	I0229 02:17:34.206679  360776 logs.go:276] 0 containers: []
	W0229 02:17:34.206691  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:17:34.206698  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:17:34.206766  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:17:34.250992  360776 cri.go:89] found id: ""
	I0229 02:17:34.251024  360776 logs.go:276] 0 containers: []
	W0229 02:17:34.251037  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:17:34.251048  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:17:34.251116  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:17:34.289582  360776 cri.go:89] found id: ""
	I0229 02:17:34.289609  360776 logs.go:276] 0 containers: []
	W0229 02:17:34.289620  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:17:34.289626  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:17:34.289690  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:17:34.335130  360776 cri.go:89] found id: ""
	I0229 02:17:34.335158  360776 logs.go:276] 0 containers: []
	W0229 02:17:34.335169  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:17:34.335182  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:17:34.335198  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:17:34.365870  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:17:34.365920  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:17:34.462536  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:17:34.462567  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:17:34.462585  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:17:34.500235  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:17:34.500281  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:17:34.551106  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:17:34.551146  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:17:37.104547  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:17:37.123303  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:17:37.123367  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:17:37.164350  360776 cri.go:89] found id: ""
	I0229 02:17:37.164378  360776 logs.go:276] 0 containers: []
	W0229 02:17:37.164391  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:17:37.164401  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:17:37.164466  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:17:37.209965  360776 cri.go:89] found id: ""
	I0229 02:17:37.210000  360776 logs.go:276] 0 containers: []
	W0229 02:17:37.210014  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:17:37.210023  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:17:37.210125  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:17:37.253162  360776 cri.go:89] found id: ""
	I0229 02:17:37.253192  360776 logs.go:276] 0 containers: []
	W0229 02:17:37.253205  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:17:37.253213  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:17:37.253293  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:17:37.300836  360776 cri.go:89] found id: ""
	I0229 02:17:37.300862  360776 logs.go:276] 0 containers: []
	W0229 02:17:37.300872  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:17:37.300880  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:17:37.300944  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:17:37.343546  360776 cri.go:89] found id: ""
	I0229 02:17:37.343573  360776 logs.go:276] 0 containers: []
	W0229 02:17:37.343585  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:17:37.343598  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:17:37.343669  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:17:37.044032  360079 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (5.313599592s)
	I0229 02:17:37.044103  360079 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 02:17:37.062591  360079 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0229 02:17:37.074885  360079 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 02:17:37.086583  360079 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 02:17:37.086639  360079 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0229 02:17:37.155776  360079 kubeadm.go:322] [init] Using Kubernetes version: v1.29.0-rc.2
	I0229 02:17:37.155861  360079 kubeadm.go:322] [preflight] Running pre-flight checks
	I0229 02:17:37.340395  360079 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0229 02:17:37.340526  360079 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0229 02:17:37.340643  360079 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0229 02:17:37.578733  360079 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0229 02:17:37.580576  360079 out.go:204]   - Generating certificates and keys ...
	I0229 02:17:37.580753  360079 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0229 02:17:37.580872  360079 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0229 02:17:37.580986  360079 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0229 02:17:37.581082  360079 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0229 02:17:37.581187  360079 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0229 02:17:37.581416  360079 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0229 02:17:37.581969  360079 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0229 02:17:37.582241  360079 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0229 02:17:37.582871  360079 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0229 02:17:37.583233  360079 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0229 02:17:37.583541  360079 kubeadm.go:322] [certs] Using the existing "sa" key
	I0229 02:17:37.583596  360079 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0229 02:17:37.843311  360079 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0229 02:17:37.914504  360079 kubeadm.go:322] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0229 02:17:38.039892  360079 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0229 02:17:38.271953  360079 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0229 02:17:38.514979  360079 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0229 02:17:38.515587  360079 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0229 02:17:38.518101  360079 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0229 02:17:34.279682  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:17:36.283111  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:17:38.780078  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:17:36.807145  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:17:39.305997  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:17:37.407526  360776 cri.go:89] found id: ""
	I0229 02:17:37.407554  360776 logs.go:276] 0 containers: []
	W0229 02:17:37.407567  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:17:37.407574  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:17:37.407642  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:17:37.486848  360776 cri.go:89] found id: ""
	I0229 02:17:37.486890  360776 logs.go:276] 0 containers: []
	W0229 02:17:37.486902  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:17:37.486910  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:17:37.486978  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:17:37.529152  360776 cri.go:89] found id: ""
	I0229 02:17:37.529187  360776 logs.go:276] 0 containers: []
	W0229 02:17:37.529199  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:17:37.529221  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:17:37.529238  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:17:37.594611  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:17:37.594642  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:17:37.612946  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:17:37.612980  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:17:37.697527  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:17:37.697552  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:17:37.697568  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:17:37.737130  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:17:37.737165  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:17:40.285260  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:17:40.302884  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:17:40.302962  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:17:40.346431  360776 cri.go:89] found id: ""
	I0229 02:17:40.346463  360776 logs.go:276] 0 containers: []
	W0229 02:17:40.346474  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:17:40.346481  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:17:40.346547  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:17:40.403100  360776 cri.go:89] found id: ""
	I0229 02:17:40.403132  360776 logs.go:276] 0 containers: []
	W0229 02:17:40.403147  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:17:40.403154  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:17:40.403223  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:17:40.466390  360776 cri.go:89] found id: ""
	I0229 02:17:40.466424  360776 logs.go:276] 0 containers: []
	W0229 02:17:40.466435  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:17:40.466444  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:17:40.466516  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:17:40.509811  360776 cri.go:89] found id: ""
	I0229 02:17:40.509840  360776 logs.go:276] 0 containers: []
	W0229 02:17:40.509851  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:17:40.509859  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:17:40.509918  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:17:40.546249  360776 cri.go:89] found id: ""
	I0229 02:17:40.546281  360776 logs.go:276] 0 containers: []
	W0229 02:17:40.546294  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:17:40.546302  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:17:40.546366  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:17:40.584490  360776 cri.go:89] found id: ""
	I0229 02:17:40.584520  360776 logs.go:276] 0 containers: []
	W0229 02:17:40.584532  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:17:40.584540  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:17:40.584602  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:17:40.628397  360776 cri.go:89] found id: ""
	I0229 02:17:40.628427  360776 logs.go:276] 0 containers: []
	W0229 02:17:40.628439  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:17:40.628447  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:17:40.628508  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:17:40.675557  360776 cri.go:89] found id: ""
	I0229 02:17:40.675584  360776 logs.go:276] 0 containers: []
	W0229 02:17:40.675593  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:17:40.675603  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:17:40.675616  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:17:40.762140  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:17:40.762167  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:17:40.762192  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:17:40.808405  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:17:40.808444  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:17:40.860511  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:17:40.860553  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:17:40.929977  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:17:40.930013  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:17:38.519654  360079 out.go:204]   - Booting up control plane ...
	I0229 02:17:38.519770  360079 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0229 02:17:38.520351  360079 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0229 02:17:38.523272  360079 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0229 02:17:38.545603  360079 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0229 02:17:38.547015  360079 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0229 02:17:38.547133  360079 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0229 02:17:38.713788  360079 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0229 02:17:40.780376  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:17:43.278958  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:17:41.308561  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:17:43.308710  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:17:44.718240  360079 kubeadm.go:322] [apiclient] All control plane components are healthy after 6.003956 seconds
	I0229 02:17:44.736859  360079 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0229 02:17:44.755878  360079 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0229 02:17:45.285373  360079 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0229 02:17:45.285648  360079 kubeadm.go:322] [mark-control-plane] Marking the node no-preload-907398 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0229 02:17:45.797261  360079 kubeadm.go:322] [bootstrap-token] Using token: 32tkap.hl2tmrs81t324g78
	I0229 02:17:45.798858  360079 out.go:204]   - Configuring RBAC rules ...
	I0229 02:17:45.798996  360079 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0229 02:17:45.805734  360079 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0229 02:17:45.814737  360079 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0229 02:17:45.818516  360079 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0229 02:17:45.823668  360079 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0229 02:17:45.827430  360079 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0229 02:17:45.842656  360079 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0229 02:17:46.096543  360079 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0229 02:17:46.292966  360079 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0229 02:17:46.293952  360079 kubeadm.go:322] 
	I0229 02:17:46.294055  360079 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0229 02:17:46.294075  360079 kubeadm.go:322] 
	I0229 02:17:46.294188  360079 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0229 02:17:46.294199  360079 kubeadm.go:322] 
	I0229 02:17:46.294231  360079 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0229 02:17:46.294314  360079 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0229 02:17:46.294432  360079 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0229 02:17:46.294454  360079 kubeadm.go:322] 
	I0229 02:17:46.294528  360079 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0229 02:17:46.294547  360079 kubeadm.go:322] 
	I0229 02:17:46.294635  360079 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0229 02:17:46.294657  360079 kubeadm.go:322] 
	I0229 02:17:46.294720  360079 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0229 02:17:46.294864  360079 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0229 02:17:46.294948  360079 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0229 02:17:46.294959  360079 kubeadm.go:322] 
	I0229 02:17:46.295078  360079 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0229 02:17:46.295174  360079 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0229 02:17:46.295185  360079 kubeadm.go:322] 
	I0229 02:17:46.295297  360079 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 32tkap.hl2tmrs81t324g78 \
	I0229 02:17:46.295404  360079 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:99ebea2fb56f25d35c82503561681d8a6f4727046bb097effad0d0203de43e75 \
	I0229 02:17:46.295441  360079 kubeadm.go:322] 	--control-plane 
	I0229 02:17:46.295448  360079 kubeadm.go:322] 
	I0229 02:17:46.295583  360079 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0229 02:17:46.295605  360079 kubeadm.go:322] 
	I0229 02:17:46.295770  360079 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 32tkap.hl2tmrs81t324g78 \
	I0229 02:17:46.295933  360079 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:99ebea2fb56f25d35c82503561681d8a6f4727046bb097effad0d0203de43e75 
	I0229 02:17:46.298233  360079 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0229 02:17:46.298273  360079 cni.go:84] Creating CNI manager for ""
	I0229 02:17:46.298290  360079 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0229 02:17:46.300109  360079 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0229 02:17:43.449607  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:17:43.466367  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:17:43.466441  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:17:43.504826  360776 cri.go:89] found id: ""
	I0229 02:17:43.504861  360776 logs.go:276] 0 containers: []
	W0229 02:17:43.504873  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:17:43.504880  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:17:43.504946  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:17:43.548641  360776 cri.go:89] found id: ""
	I0229 02:17:43.548682  360776 logs.go:276] 0 containers: []
	W0229 02:17:43.548693  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:17:43.548701  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:17:43.548760  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:17:43.591044  360776 cri.go:89] found id: ""
	I0229 02:17:43.591075  360776 logs.go:276] 0 containers: []
	W0229 02:17:43.591085  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:17:43.591092  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:17:43.591152  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:17:43.639237  360776 cri.go:89] found id: ""
	I0229 02:17:43.639261  360776 logs.go:276] 0 containers: []
	W0229 02:17:43.639269  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:17:43.639275  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:17:43.639329  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:17:43.677231  360776 cri.go:89] found id: ""
	I0229 02:17:43.677264  360776 logs.go:276] 0 containers: []
	W0229 02:17:43.677277  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:17:43.677285  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:17:43.677359  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:17:43.721264  360776 cri.go:89] found id: ""
	I0229 02:17:43.721295  360776 logs.go:276] 0 containers: []
	W0229 02:17:43.721306  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:17:43.721314  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:17:43.721379  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:17:43.757248  360776 cri.go:89] found id: ""
	I0229 02:17:43.757281  360776 logs.go:276] 0 containers: []
	W0229 02:17:43.757293  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:17:43.757300  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:17:43.757365  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:17:43.802304  360776 cri.go:89] found id: ""
	I0229 02:17:43.802332  360776 logs.go:276] 0 containers: []
	W0229 02:17:43.802343  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:17:43.802359  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:17:43.802375  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:17:43.855921  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:17:43.855949  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:17:43.869586  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:17:43.869623  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:17:43.945526  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:17:43.945562  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:17:43.945579  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:17:43.987179  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:17:43.987215  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:17:46.537504  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:17:46.556578  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:17:46.556653  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:17:46.603983  360776 cri.go:89] found id: ""
	I0229 02:17:46.604012  360776 logs.go:276] 0 containers: []
	W0229 02:17:46.604025  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:17:46.604037  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:17:46.604107  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:17:46.657708  360776 cri.go:89] found id: ""
	I0229 02:17:46.657736  360776 logs.go:276] 0 containers: []
	W0229 02:17:46.657747  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:17:46.657754  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:17:46.657820  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:17:46.708795  360776 cri.go:89] found id: ""
	I0229 02:17:46.708830  360776 logs.go:276] 0 containers: []
	W0229 02:17:46.708843  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:17:46.708852  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:17:46.708920  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:17:46.758013  360776 cri.go:89] found id: ""
	I0229 02:17:46.758043  360776 logs.go:276] 0 containers: []
	W0229 02:17:46.758056  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:17:46.758064  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:17:46.758157  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:17:46.813107  360776 cri.go:89] found id: ""
	I0229 02:17:46.813138  360776 logs.go:276] 0 containers: []
	W0229 02:17:46.813149  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:17:46.813156  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:17:46.813219  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:17:46.859040  360776 cri.go:89] found id: ""
	I0229 02:17:46.859070  360776 logs.go:276] 0 containers: []
	W0229 02:17:46.859081  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:17:46.859089  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:17:46.859154  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:17:46.905302  360776 cri.go:89] found id: ""
	I0229 02:17:46.905334  360776 logs.go:276] 0 containers: []
	W0229 02:17:46.905346  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:17:46.905354  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:17:46.905416  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:17:46.950465  360776 cri.go:89] found id: ""
	I0229 02:17:46.950491  360776 logs.go:276] 0 containers: []
	W0229 02:17:46.950502  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:17:46.950515  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:17:46.950530  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:17:47.035016  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:17:47.035044  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:17:47.035062  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:17:47.074108  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:17:47.074140  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:17:47.122149  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:17:47.122183  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:17:47.187233  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:17:47.187283  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:17:46.301876  360079 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0229 02:17:46.328857  360079 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0229 02:17:46.365095  360079 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0229 02:17:46.365210  360079 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:17:46.365239  360079 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=f83faa3abac33ca85ff15afa19006ad0a2554d61 minikube.k8s.io/name=no-preload-907398 minikube.k8s.io/updated_at=2024_02_29T02_17_46_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:17:46.445475  360079 ops.go:34] apiserver oom_adj: -16
	I0229 02:17:46.712653  360079 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:17:47.213595  360079 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:17:47.713471  360079 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:17:45.279713  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:17:47.778580  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:17:45.309019  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:17:47.808652  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:17:49.708451  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:17:49.727327  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:17:49.727383  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:17:49.775679  360776 cri.go:89] found id: ""
	I0229 02:17:49.775712  360776 logs.go:276] 0 containers: []
	W0229 02:17:49.775723  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:17:49.775732  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:17:49.775795  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:17:49.821348  360776 cri.go:89] found id: ""
	I0229 02:17:49.821378  360776 logs.go:276] 0 containers: []
	W0229 02:17:49.821387  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:17:49.821393  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:17:49.821459  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:17:49.864148  360776 cri.go:89] found id: ""
	I0229 02:17:49.864173  360776 logs.go:276] 0 containers: []
	W0229 02:17:49.864182  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:17:49.864188  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:17:49.864281  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:17:49.904720  360776 cri.go:89] found id: ""
	I0229 02:17:49.904747  360776 logs.go:276] 0 containers: []
	W0229 02:17:49.904756  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:17:49.904768  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:17:49.904835  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:17:49.941952  360776 cri.go:89] found id: ""
	I0229 02:17:49.941976  360776 logs.go:276] 0 containers: []
	W0229 02:17:49.941985  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:17:49.941992  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:17:49.942050  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:17:49.987518  360776 cri.go:89] found id: ""
	I0229 02:17:49.987549  360776 logs.go:276] 0 containers: []
	W0229 02:17:49.987559  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:17:49.987566  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:17:49.987642  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:17:50.030662  360776 cri.go:89] found id: ""
	I0229 02:17:50.030691  360776 logs.go:276] 0 containers: []
	W0229 02:17:50.030700  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:17:50.030708  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:17:50.030768  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:17:50.075564  360776 cri.go:89] found id: ""
	I0229 02:17:50.075594  360776 logs.go:276] 0 containers: []
	W0229 02:17:50.075605  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:17:50.075617  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:17:50.075634  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:17:50.144223  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:17:50.144261  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:17:50.190615  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:17:50.190649  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:17:50.209014  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:17:50.209041  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:17:50.291096  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:17:50.291121  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:17:50.291135  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:17:48.213151  360079 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:17:48.713484  360079 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:17:49.212735  360079 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:17:49.713172  360079 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:17:50.213286  360079 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:17:50.712875  360079 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:17:51.213491  360079 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:17:51.713354  360079 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:17:52.212811  360079 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:17:52.712670  360079 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:17:49.779580  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:17:51.771065  360217 pod_ready.go:81] duration metric: took 4m0.00037351s waiting for pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace to be "Ready" ...
	E0229 02:17:51.771121  360217 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace to be "Ready" (will not retry!)
	I0229 02:17:51.771147  360217 pod_ready.go:38] duration metric: took 4m14.54716064s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 02:17:51.771185  360217 kubeadm.go:640] restartCluster took 4m31.62028036s
	W0229 02:17:51.771272  360217 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0229 02:17:51.771309  360217 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0229 02:17:50.307305  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:17:52.309458  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:17:54.310095  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:17:52.827936  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:17:52.844926  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:17:52.845027  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:17:52.892302  360776 cri.go:89] found id: ""
	I0229 02:17:52.892336  360776 logs.go:276] 0 containers: []
	W0229 02:17:52.892349  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:17:52.892357  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:17:52.892417  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:17:52.943564  360776 cri.go:89] found id: ""
	I0229 02:17:52.943597  360776 logs.go:276] 0 containers: []
	W0229 02:17:52.943607  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:17:52.943615  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:17:52.943683  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:17:52.990217  360776 cri.go:89] found id: ""
	I0229 02:17:52.990251  360776 logs.go:276] 0 containers: []
	W0229 02:17:52.990269  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:17:52.990278  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:17:52.990347  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:17:53.038508  360776 cri.go:89] found id: ""
	I0229 02:17:53.038542  360776 logs.go:276] 0 containers: []
	W0229 02:17:53.038554  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:17:53.038562  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:17:53.038622  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:17:53.082156  360776 cri.go:89] found id: ""
	I0229 02:17:53.082184  360776 logs.go:276] 0 containers: []
	W0229 02:17:53.082197  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:17:53.082205  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:17:53.082287  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:17:53.149247  360776 cri.go:89] found id: ""
	I0229 02:17:53.149284  360776 logs.go:276] 0 containers: []
	W0229 02:17:53.149295  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:17:53.149304  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:17:53.149371  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:17:53.201169  360776 cri.go:89] found id: ""
	I0229 02:17:53.201199  360776 logs.go:276] 0 containers: []
	W0229 02:17:53.201211  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:17:53.201219  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:17:53.201286  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:17:53.268458  360776 cri.go:89] found id: ""
	I0229 02:17:53.268493  360776 logs.go:276] 0 containers: []
	W0229 02:17:53.268507  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:17:53.268521  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:17:53.268546  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:17:53.288661  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:17:53.288708  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:17:53.371251  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:17:53.371277  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:17:53.371295  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:17:53.415981  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:17:53.416033  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:17:53.464558  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:17:53.464600  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:17:56.030905  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:17:56.046625  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:17:56.046709  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:17:56.090035  360776 cri.go:89] found id: ""
	I0229 02:17:56.090066  360776 logs.go:276] 0 containers: []
	W0229 02:17:56.090094  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:17:56.090103  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:17:56.090176  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:17:56.158245  360776 cri.go:89] found id: ""
	I0229 02:17:56.158276  360776 logs.go:276] 0 containers: []
	W0229 02:17:56.158289  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:17:56.158297  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:17:56.158378  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:17:56.203917  360776 cri.go:89] found id: ""
	I0229 02:17:56.203947  360776 logs.go:276] 0 containers: []
	W0229 02:17:56.203959  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:17:56.203967  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:17:56.204037  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:17:56.267950  360776 cri.go:89] found id: ""
	I0229 02:17:56.267978  360776 logs.go:276] 0 containers: []
	W0229 02:17:56.267995  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:17:56.268003  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:17:56.268065  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:17:56.312936  360776 cri.go:89] found id: ""
	I0229 02:17:56.312967  360776 logs.go:276] 0 containers: []
	W0229 02:17:56.312979  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:17:56.312987  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:17:56.313050  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:17:56.357548  360776 cri.go:89] found id: ""
	I0229 02:17:56.357584  360776 logs.go:276] 0 containers: []
	W0229 02:17:56.357596  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:17:56.357605  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:17:56.357674  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:17:56.401842  360776 cri.go:89] found id: ""
	I0229 02:17:56.401876  360776 logs.go:276] 0 containers: []
	W0229 02:17:56.401890  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:17:56.401898  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:17:56.401965  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:17:56.448506  360776 cri.go:89] found id: ""
	I0229 02:17:56.448538  360776 logs.go:276] 0 containers: []
	W0229 02:17:56.448549  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:17:56.448562  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:17:56.448578  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:17:56.498783  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:17:56.498821  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:17:56.516722  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:17:56.516768  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:17:56.601770  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:17:56.601797  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:17:56.601815  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:17:56.642969  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:17:56.643010  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:17:53.212697  360079 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:17:53.712843  360079 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:17:54.212762  360079 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:17:54.713449  360079 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:17:55.213612  360079 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:17:55.712707  360079 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:17:56.213635  360079 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:17:56.713158  360079 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:17:57.213615  360079 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:17:57.713426  360079 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:17:57.378120  360217 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (5.606758107s)
	I0229 02:17:57.378252  360217 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 02:17:57.396898  360217 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0229 02:17:57.409107  360217 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 02:17:57.420877  360217 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 02:17:57.420927  360217 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0229 02:17:57.486066  360217 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0229 02:17:57.486157  360217 kubeadm.go:322] [preflight] Running pre-flight checks
	I0229 02:17:57.660083  360217 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0229 02:17:57.660277  360217 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0229 02:17:57.660395  360217 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0229 02:17:57.916360  360217 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0229 02:17:58.213116  360079 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:17:58.349580  360079 kubeadm.go:1088] duration metric: took 11.984450803s to wait for elevateKubeSystemPrivileges.
	I0229 02:17:58.349651  360079 kubeadm.go:406] StartCluster complete in 4m58.053023709s
	I0229 02:17:58.349775  360079 settings.go:142] acquiring lock: {Name:mkf6d985c87ae1ba2300543c86d438bf48134dc6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:17:58.349948  360079 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18063-309085/kubeconfig
	I0229 02:17:58.351856  360079 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18063-309085/kubeconfig: {Name:mkd85f0f36cbc770f723a754929a01738907b7bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:17:58.352191  360079 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0229 02:17:58.352353  360079 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0229 02:17:58.352434  360079 addons.go:69] Setting storage-provisioner=true in profile "no-preload-907398"
	I0229 02:17:58.352462  360079 addons.go:234] Setting addon storage-provisioner=true in "no-preload-907398"
	W0229 02:17:58.352474  360079 addons.go:243] addon storage-provisioner should already be in state true
	I0229 02:17:58.352492  360079 config.go:182] Loaded profile config "no-preload-907398": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.29.0-rc.2
	I0229 02:17:58.352546  360079 addons.go:69] Setting default-storageclass=true in profile "no-preload-907398"
	I0229 02:17:58.352600  360079 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-907398"
	I0229 02:17:58.352615  360079 host.go:66] Checking if "no-preload-907398" exists ...
	I0229 02:17:58.353032  360079 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0229 02:17:58.353043  360079 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0229 02:17:58.353052  360079 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:17:58.353068  360079 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:17:58.353120  360079 addons.go:69] Setting metrics-server=true in profile "no-preload-907398"
	I0229 02:17:58.353134  360079 addons.go:234] Setting addon metrics-server=true in "no-preload-907398"
	W0229 02:17:58.353141  360079 addons.go:243] addon metrics-server should already be in state true
	I0229 02:17:58.353182  360079 host.go:66] Checking if "no-preload-907398" exists ...
	I0229 02:17:58.353351  360079 addons.go:69] Setting dashboard=true in profile "no-preload-907398"
	I0229 02:17:58.353372  360079 addons.go:234] Setting addon dashboard=true in "no-preload-907398"
	W0229 02:17:58.353379  360079 addons.go:243] addon dashboard should already be in state true
	I0229 02:17:58.353416  360079 host.go:66] Checking if "no-preload-907398" exists ...
	I0229 02:17:58.353501  360079 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0229 02:17:58.353521  360079 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:17:58.353780  360079 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0229 02:17:58.353802  360079 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:17:58.374370  360079 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32883
	I0229 02:17:58.374457  360079 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35955
	I0229 02:17:58.374503  360079 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41265
	I0229 02:17:58.374564  360079 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34767
	I0229 02:17:58.375443  360079 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:17:58.375468  360079 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:17:58.375533  360079 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:17:58.375559  360079 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:17:58.375998  360079 main.go:141] libmachine: Using API Version  1
	I0229 02:17:58.376013  360079 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:17:58.376104  360079 main.go:141] libmachine: Using API Version  1
	I0229 02:17:58.376118  360079 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:17:58.376153  360079 main.go:141] libmachine: Using API Version  1
	I0229 02:17:58.376166  360079 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:17:58.376242  360079 main.go:141] libmachine: Using API Version  1
	I0229 02:17:58.376255  360079 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:17:58.376604  360079 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:17:58.376608  360079 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:17:58.376642  360079 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:17:58.377147  360079 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0229 02:17:58.377181  360079 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:17:58.377256  360079 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0229 02:17:58.377274  360079 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:17:58.377339  360079 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:17:58.377532  360079 main.go:141] libmachine: (no-preload-907398) Calling .GetState
	I0229 02:17:58.377723  360079 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0229 02:17:58.377754  360079 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:17:58.380332  360079 addons.go:234] Setting addon default-storageclass=true in "no-preload-907398"
	W0229 02:17:58.380348  360079 addons.go:243] addon default-storageclass should already be in state true
	I0229 02:17:58.380373  360079 host.go:66] Checking if "no-preload-907398" exists ...
	I0229 02:17:58.380607  360079 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0229 02:17:58.380620  360079 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:17:58.399601  360079 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37663
	I0229 02:17:58.400286  360079 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:17:58.400514  360079 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36145
	I0229 02:17:58.401167  360079 main.go:141] libmachine: Using API Version  1
	I0229 02:17:58.401184  360079 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:17:58.401173  360079 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:17:58.401760  360079 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:17:58.402030  360079 main.go:141] libmachine: (no-preload-907398) Calling .GetState
	I0229 02:17:58.402970  360079 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36619
	W0229 02:17:58.403287  360079 kapi.go:245] failed rescaling "coredns" deployment in "kube-system" namespace and "no-preload-907398" context to 1 replicas: non-retryable failure while rescaling coredns deployment: Operation cannot be fulfilled on deployments.apps "coredns": the object has been modified; please apply your changes to the latest version and try again
	E0229 02:17:58.403312  360079 start.go:219] Unable to scale down deployment "coredns" in namespace "kube-system" to 1 replica: non-retryable failure while rescaling coredns deployment: Operation cannot be fulfilled on deployments.apps "coredns": the object has been modified; please apply your changes to the latest version and try again
	I0229 02:17:58.403338  360079 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.150 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0229 02:17:58.405226  360079 out.go:177] * Verifying Kubernetes components...
	I0229 02:17:58.403538  360079 main.go:141] libmachine: Using API Version  1
	I0229 02:17:58.403723  360079 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:17:58.404198  360079 main.go:141] libmachine: (no-preload-907398) Calling .DriverName
	I0229 02:17:58.406627  360079 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:17:58.406718  360079 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 02:17:58.412539  360079 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 02:17:58.407373  360079 main.go:141] libmachine: Using API Version  1
	I0229 02:17:58.407398  360079 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:17:58.414311  360079 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0229 02:17:58.414334  360079 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0229 02:17:58.414352  360079 main.go:141] libmachine: (no-preload-907398) Calling .GetSSHHostname
	I0229 02:17:58.412590  360079 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:17:58.412844  360079 main.go:141] libmachine: (no-preload-907398) Calling .GetState
	I0229 02:17:58.413706  360079 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44029
	I0229 02:17:58.415059  360079 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:17:58.415498  360079 main.go:141] libmachine: (no-preload-907398) Calling .GetState
	I0229 02:17:58.417082  360079 main.go:141] libmachine: (no-preload-907398) Calling .DriverName
	I0229 02:17:58.417438  360079 main.go:141] libmachine: (no-preload-907398) Calling .DriverName
	I0229 02:17:58.418583  360079 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0229 02:17:58.419735  360079 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0229 02:17:58.420843  360079 addons.go:426] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0229 02:17:58.420858  360079 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0229 02:17:58.420876  360079 main.go:141] libmachine: (no-preload-907398) Calling .GetSSHHostname
	I0229 02:17:58.418780  360079 main.go:141] libmachine: (no-preload-907398) DBG | domain no-preload-907398 has defined MAC address 52:54:00:c8:70:9e in network mk-no-preload-907398
	I0229 02:17:58.420946  360079 main.go:141] libmachine: (no-preload-907398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:70:9e", ip: ""} in network mk-no-preload-907398: {Iface:virbr1 ExpiryTime:2024-02-29 03:12:50 +0000 UTC Type:0 Mac:52:54:00:c8:70:9e Iaid: IPaddr:192.168.61.150 Prefix:24 Hostname:no-preload-907398 Clientid:01:52:54:00:c8:70:9e}
	I0229 02:17:58.420968  360079 main.go:141] libmachine: (no-preload-907398) DBG | domain no-preload-907398 has defined IP address 192.168.61.150 and MAC address 52:54:00:c8:70:9e in network mk-no-preload-907398
	I0229 02:17:58.418281  360079 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:17:58.422030  360079 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0229 02:17:57.917746  360217 out.go:204]   - Generating certificates and keys ...
	I0229 02:17:57.917859  360217 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0229 02:17:57.917965  360217 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0229 02:17:57.918411  360217 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0229 02:17:57.918918  360217 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0229 02:17:57.919445  360217 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0229 02:17:57.919873  360217 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0229 02:17:57.920396  360217 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0229 02:17:57.920807  360217 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0229 02:17:57.921322  360217 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0229 02:17:57.921710  360217 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0229 02:17:57.922094  360217 kubeadm.go:322] [certs] Using the existing "sa" key
	I0229 02:17:57.922176  360217 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0229 02:17:58.103086  360217 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0229 02:17:58.146435  360217 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0229 02:17:58.422571  360217 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0229 02:17:58.544422  360217 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0229 02:17:58.545127  360217 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0229 02:17:58.547666  360217 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0229 02:17:58.549247  360217 out.go:204]   - Booting up control plane ...
	I0229 02:17:58.549352  360217 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0229 02:17:58.549459  360217 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0229 02:17:58.550242  360217 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0229 02:17:58.577890  360217 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0229 02:17:58.579022  360217 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0229 02:17:58.579096  360217 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0229 02:17:58.733877  360217 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
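	(Each "kubeadm.go:322]" line above is one line of kubeadm's own stdout, captured and re-logged as it arrives. A minimal sketch of streaming a child process's output line by line — the config path and flags are assumptions, since the exact invocation is not shown in this log:

	    package main

	    import (
	        "bufio"
	        "log"
	        "os/exec"
	    )

	    func main() {
	        cmd := exec.Command("sudo", "kubeadm", "init",
	            "--config", "/var/tmp/minikube/kubeadm.yaml") // path assumed
	        stdout, err := cmd.StdoutPipe()
	        if err != nil {
	            log.Fatal(err)
	        }
	        if err := cmd.Start(); err != nil {
	            log.Fatal(err)
	        }
	        sc := bufio.NewScanner(stdout)
	        for sc.Scan() {
	            log.Printf("kubeadm: %s", sc.Text()) // mirrors the kubeadm.go:322 lines
	        }
	        if err := cmd.Wait(); err != nil {
	            log.Fatal(err)
	        }
	    }

	Streaming rather than buffering is what lets the log interleave kubeadm's phases with the other clusters' progress in real time.)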
	I0229 02:17:56.311800  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:17:58.809250  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:17:58.419456  360079 main.go:141] libmachine: (no-preload-907398) Calling .GetSSHPort
	I0229 02:17:58.421615  360079 main.go:141] libmachine: Using API Version  1
	I0229 02:17:58.423246  360079 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:17:58.423335  360079 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0229 02:17:58.423343  360079 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0229 02:17:58.423357  360079 main.go:141] libmachine: (no-preload-907398) Calling .GetSSHHostname
	I0229 02:17:58.424461  360079 main.go:141] libmachine: (no-preload-907398) Calling .GetSSHKeyPath
	I0229 02:17:58.424633  360079 main.go:141] libmachine: (no-preload-907398) Calling .GetSSHUsername
	I0229 02:17:58.424741  360079 sshutil.go:53] new ssh client: &{IP:192.168.61.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-309085/.minikube/machines/no-preload-907398/id_rsa Username:docker}
	I0229 02:17:58.424781  360079 main.go:141] libmachine: (no-preload-907398) DBG | domain no-preload-907398 has defined MAC address 52:54:00:c8:70:9e in network mk-no-preload-907398
	I0229 02:17:58.425249  360079 main.go:141] libmachine: (no-preload-907398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:70:9e", ip: ""} in network mk-no-preload-907398: {Iface:virbr1 ExpiryTime:2024-02-29 03:12:50 +0000 UTC Type:0 Mac:52:54:00:c8:70:9e Iaid: IPaddr:192.168.61.150 Prefix:24 Hostname:no-preload-907398 Clientid:01:52:54:00:c8:70:9e}
	I0229 02:17:58.425315  360079 main.go:141] libmachine: (no-preload-907398) DBG | domain no-preload-907398 has defined IP address 192.168.61.150 and MAC address 52:54:00:c8:70:9e in network mk-no-preload-907398
	I0229 02:17:58.425145  360079 main.go:141] libmachine: (no-preload-907398) Calling .GetSSHPort
	I0229 02:17:58.425622  360079 main.go:141] libmachine: (no-preload-907398) Calling .GetSSHKeyPath
	I0229 02:17:58.425732  360079 main.go:141] libmachine: (no-preload-907398) Calling .GetSSHUsername
	I0229 02:17:58.425865  360079 sshutil.go:53] new ssh client: &{IP:192.168.61.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-309085/.minikube/machines/no-preload-907398/id_rsa Username:docker}
	I0229 02:17:58.426305  360079 main.go:141] libmachine: (no-preload-907398) DBG | domain no-preload-907398 has defined MAC address 52:54:00:c8:70:9e in network mk-no-preload-907398
	I0229 02:17:58.430169  360079 main.go:141] libmachine: (no-preload-907398) Calling .GetSSHPort
	I0229 02:17:58.430190  360079 main.go:141] libmachine: (no-preload-907398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:70:9e", ip: ""} in network mk-no-preload-907398: {Iface:virbr1 ExpiryTime:2024-02-29 03:12:50 +0000 UTC Type:0 Mac:52:54:00:c8:70:9e Iaid: IPaddr:192.168.61.150 Prefix:24 Hostname:no-preload-907398 Clientid:01:52:54:00:c8:70:9e}
	I0229 02:17:58.430213  360079 main.go:141] libmachine: (no-preload-907398) DBG | domain no-preload-907398 has defined IP address 192.168.61.150 and MAC address 52:54:00:c8:70:9e in network mk-no-preload-907398
	I0229 02:17:58.430221  360079 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:17:58.430491  360079 main.go:141] libmachine: (no-preload-907398) Calling .GetSSHKeyPath
	I0229 02:17:58.430917  360079 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0229 02:17:58.430946  360079 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:17:58.431346  360079 main.go:141] libmachine: (no-preload-907398) Calling .GetSSHUsername
	I0229 02:17:58.431541  360079 sshutil.go:53] new ssh client: &{IP:192.168.61.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-309085/.minikube/machines/no-preload-907398/id_rsa Username:docker}
	I0229 02:17:58.448561  360079 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39103
	I0229 02:17:58.449216  360079 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:17:58.449840  360079 main.go:141] libmachine: Using API Version  1
	I0229 02:17:58.449868  360079 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:17:58.450301  360079 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:17:58.450574  360079 main.go:141] libmachine: (no-preload-907398) Calling .GetState
	I0229 02:17:58.452414  360079 main.go:141] libmachine: (no-preload-907398) Calling .DriverName
	I0229 02:17:58.452680  360079 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0229 02:17:58.452696  360079 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0229 02:17:58.452714  360079 main.go:141] libmachine: (no-preload-907398) Calling .GetSSHHostname
	I0229 02:17:58.455680  360079 main.go:141] libmachine: (no-preload-907398) DBG | domain no-preload-907398 has defined MAC address 52:54:00:c8:70:9e in network mk-no-preload-907398
	I0229 02:17:58.456155  360079 main.go:141] libmachine: (no-preload-907398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:70:9e", ip: ""} in network mk-no-preload-907398: {Iface:virbr1 ExpiryTime:2024-02-29 03:12:50 +0000 UTC Type:0 Mac:52:54:00:c8:70:9e Iaid: IPaddr:192.168.61.150 Prefix:24 Hostname:no-preload-907398 Clientid:01:52:54:00:c8:70:9e}
	I0229 02:17:58.456179  360079 main.go:141] libmachine: (no-preload-907398) DBG | domain no-preload-907398 has defined IP address 192.168.61.150 and MAC address 52:54:00:c8:70:9e in network mk-no-preload-907398
	I0229 02:17:58.456414  360079 main.go:141] libmachine: (no-preload-907398) Calling .GetSSHPort
	I0229 02:17:58.456600  360079 main.go:141] libmachine: (no-preload-907398) Calling .GetSSHKeyPath
	I0229 02:17:58.456726  360079 main.go:141] libmachine: (no-preload-907398) Calling .GetSSHUsername
	I0229 02:17:58.457041  360079 sshutil.go:53] new ssh client: &{IP:192.168.61.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-309085/.minikube/machines/no-preload-907398/id_rsa Username:docker}
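	(The "sshutil.go:53] new ssh client: &{IP:... Port:22 SSHKeyPath:... Username:docker}" lines show the four fields each connection is built from. A sketch of dialing with those fields via golang.org/x/crypto/ssh — host-key verification is skipped here for brevity, which a real client should not do:

	    package sshutil

	    import (
	        "net"
	        "os"

	        "golang.org/x/crypto/ssh"
	    )

	    // Dial opens an SSH connection using a private-key file, mirroring
	    // the IP/Port/SSHKeyPath/Username fields logged above.
	    func Dial(ip, port, keyPath, user string) (*ssh.Client, error) {
	        key, err := os.ReadFile(keyPath)
	        if err != nil {
	            return nil, err
	        }
	        signer, err := ssh.ParsePrivateKey(key)
	        if err != nil {
	            return nil, err
	        }
	        cfg := &ssh.ClientConfig{
	            User:            user,
	            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
	            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // demo only
	        }
	        return ssh.Dial("tcp", net.JoinHostPort(ip, port), cfg)
	    })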
	I0229 02:17:58.560024  360079 node_ready.go:35] waiting up to 6m0s for node "no-preload-907398" to be "Ready" ...
	I0229 02:17:58.560149  360079 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
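	(The bash pipeline above edits the CoreDNS ConfigMap in place: the first sed expression inserts a hosts block immediately before the "forward . /etc/resolv.conf" line, the second inserts "log" before "errors", and the result is pushed back with "kubectl replace -f -". Reconstructing from the sed script — not copied from the cluster — the affected region of the Corefile afterwards should read roughly:

	        log
	        errors
	        ...                  (other directives unchanged)
	        hosts {
	           192.168.61.1 host.minikube.internal
	           fallthrough
	        }
	        forward . /etc/resolv.conf

	The "host record injected into CoreDNS's ConfigMap" line at 02:17:59.335848 below confirms the replace succeeded.)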
	I0229 02:17:58.562721  360079 node_ready.go:49] node "no-preload-907398" has status "Ready":"True"
	I0229 02:17:58.562749  360079 node_ready.go:38] duration metric: took 2.693389ms waiting for node "no-preload-907398" to be "Ready" ...
	I0229 02:17:58.562767  360079 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods, including those with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler], to be "Ready" ...
	I0229 02:17:58.568960  360079 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-907398" in "kube-system" namespace to be "Ready" ...
	I0229 02:17:58.583361  360079 pod_ready.go:92] pod "etcd-no-preload-907398" in "kube-system" namespace has status "Ready":"True"
	I0229 02:17:58.583392  360079 pod_ready.go:81] duration metric: took 14.411119ms waiting for pod "etcd-no-preload-907398" in "kube-system" namespace to be "Ready" ...
	I0229 02:17:58.583408  360079 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-907398" in "kube-system" namespace to be "Ready" ...
	I0229 02:17:58.612395  360079 pod_ready.go:92] pod "kube-apiserver-no-preload-907398" in "kube-system" namespace has status "Ready":"True"
	I0229 02:17:58.612430  360079 pod_ready.go:81] duration metric: took 29.012395ms waiting for pod "kube-apiserver-no-preload-907398" in "kube-system" namespace to be "Ready" ...
	I0229 02:17:58.612444  360079 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-907398" in "kube-system" namespace to be "Ready" ...
	I0229 02:17:58.624710  360079 pod_ready.go:92] pod "kube-controller-manager-no-preload-907398" in "kube-system" namespace has status "Ready":"True"
	I0229 02:17:58.624742  360079 pod_ready.go:81] duration metric: took 12.287509ms waiting for pod "kube-controller-manager-no-preload-907398" in "kube-system" namespace to be "Ready" ...
	I0229 02:17:58.624755  360079 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-907398" in "kube-system" namespace to be "Ready" ...
	I0229 02:17:58.635770  360079 pod_ready.go:92] pod "kube-scheduler-no-preload-907398" in "kube-system" namespace has status "Ready":"True"
	I0229 02:17:58.635801  360079 pod_ready.go:81] duration metric: took 11.037539ms waiting for pod "kube-scheduler-no-preload-907398" in "kube-system" namespace to be "Ready" ...
	I0229 02:17:58.635813  360079 pod_ready.go:38] duration metric: took 73.031722ms of extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
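	(The pod_ready waits above all follow one pattern: fetch the pod, look for the PodReady condition, poll until it reports True. A compact client-go sketch of that check, assuming a kubeconfig-based clientset — this is the general pattern, not minikube's exact code:

	    package ready

	    import (
	        "context"
	        "time"

	        corev1 "k8s.io/api/core/v1"
	        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	        "k8s.io/client-go/kubernetes"
	    )

	    // WaitPodReady polls until the pod's Ready condition is True or the
	    // timeout elapses, matching the pod_ready.go wait/duration lines.
	    func WaitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) bool {
	        deadline := time.Now().Add(timeout)
	        for time.Now().Before(deadline) {
	            pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
	            if err == nil {
	                for _, c := range pod.Status.Conditions {
	                    if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
	                        return true
	                    }
	                }
	            }
	            time.Sleep(500 * time.Millisecond)
	        }
	        return false
	    })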
	I0229 02:17:58.635837  360079 api_server.go:52] waiting for apiserver process to appear ...
	I0229 02:17:58.635901  360079 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:17:58.706760  360079 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0229 02:17:58.712477  360079 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0229 02:17:58.747607  360079 addons.go:426] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0229 02:17:58.747647  360079 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0229 02:17:58.782941  360079 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0229 02:17:58.782966  360079 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0229 02:17:58.861056  360079 addons.go:426] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0229 02:17:58.861086  360079 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0229 02:17:58.914123  360079 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0229 02:17:58.914153  360079 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0229 02:17:58.977830  360079 addons.go:426] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0229 02:17:58.977864  360079 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0229 02:17:59.075704  360079 addons.go:426] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0229 02:17:59.075734  360079 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0229 02:17:59.087287  360079 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0229 02:17:59.087318  360079 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0229 02:17:59.208828  360079 addons.go:426] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0229 02:17:59.208860  360079 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0229 02:17:59.244139  360079 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0229 02:17:59.335848  360079 start.go:929] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
	I0229 02:17:59.335882  360079 addons.go:426] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0229 02:17:59.335906  360079 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0229 02:17:59.335928  360079 api_server.go:72] duration metric: took 932.545738ms to wait for apiserver process to appear ...
	I0229 02:17:59.335948  360079 api_server.go:88] waiting for apiserver healthz status ...
	I0229 02:17:59.335972  360079 api_server.go:253] Checking apiserver healthz at https://192.168.61.150:8443/healthz ...
	I0229 02:17:59.385781  360079 addons.go:426] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0229 02:17:59.385818  360079 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0229 02:17:59.446518  360079 api_server.go:279] https://192.168.61.150:8443/healthz returned 200:
	ok
	I0229 02:17:59.448251  360079 addons.go:426] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0229 02:17:59.448278  360079 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0229 02:17:59.480111  360079 api_server.go:141] control plane version: v1.29.0-rc.2
	I0229 02:17:59.480149  360079 api_server.go:131] duration metric: took 144.191444ms to wait for apiserver health ...
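	(The healthz wait above is a plain HTTPS GET against the apiserver that succeeds once it returns 200 with body "ok", as logged at 02:17:59.446518. A sketch, assuming the cluster CA file is used to trust the endpoint — the CA path is the caller's, not taken from this log:

	    package health

	    import (
	        "crypto/tls"
	        "crypto/x509"
	        "io"
	        "net/http"
	        "os"
	        "time"
	    )

	    // Healthz returns true when GET <url>/healthz answers 200 "ok".
	    func Healthz(url, caPath string) bool {
	        pem, err := os.ReadFile(caPath)
	        if err != nil {
	            return false
	        }
	        pool := x509.NewCertPool()
	        pool.AppendCertsFromPEM(pem)
	        c := &http.Client{
	            Timeout:   5 * time.Second,
	            Transport: &http.Transport{TLSClientConfig: &tls.Config{RootCAs: pool}},
	        }
	        resp, err := c.Get(url)
	        if err != nil {
	            return false
	        }
	        defer resp.Body.Close()
	        body, _ := io.ReadAll(resp.Body)
	        return resp.StatusCode == http.StatusOK && string(body) == "ok"
	    })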
	I0229 02:17:59.480161  360079 system_pods.go:43] waiting for kube-system pods to appear ...
	I0229 02:17:59.524432  360079 system_pods.go:59] 7 kube-system pods found
	I0229 02:17:59.524474  360079 system_pods.go:61] "coredns-76f75df574-lh6fx" [092a35bd-da64-465c-aac2-2db805ba942f] Pending
	I0229 02:17:59.524481  360079 system_pods.go:61] "coredns-76f75df574-s7wgl" [edf9b2e4-e3ae-44b4-9561-722b74c19032] Pending
	I0229 02:17:59.524486  360079 system_pods.go:61] "etcd-no-preload-907398" [90447e7e-0f30-4a5c-87d2-2ea1afc92dbe] Running
	I0229 02:17:59.524492  360079 system_pods.go:61] "kube-apiserver-no-preload-907398" [14240b57-03a5-4746-a246-28a338d7dbc1] Running
	I0229 02:17:59.524499  360079 system_pods.go:61] "kube-controller-manager-no-preload-907398" [45c6c2c5-ac84-4fc2-8981-e2f2ae6d3212] Running
	I0229 02:17:59.524508  360079 system_pods.go:61] "kube-proxy-r95w4" [32ff71b3-9287-4afa-9743-0e5e9068fa6d] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0229 02:17:59.524514  360079 system_pods.go:61] "kube-scheduler-no-preload-907398" [db7fbfcd-11c3-4ffb-a4c0-5e57ec7e069f] Running
	I0229 02:17:59.524526  360079 system_pods.go:74] duration metric: took 44.35791ms to wait for pod list to return data ...
	I0229 02:17:59.524539  360079 default_sa.go:34] waiting for default service account to be created ...
	I0229 02:17:59.556701  360079 addons.go:426] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0229 02:17:59.556744  360079 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0229 02:17:59.586815  360079 default_sa.go:45] found service account: "default"
	I0229 02:17:59.586867  360079 default_sa.go:55] duration metric: took 62.31539ms for default service account to be created ...
	I0229 02:17:59.586883  360079 system_pods.go:116] waiting for k8s-apps to be running ...
	I0229 02:17:59.613376  360079 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0229 02:17:59.661179  360079 system_pods.go:86] 7 kube-system pods found
	I0229 02:17:59.661281  360079 system_pods.go:89] "coredns-76f75df574-lh6fx" [092a35bd-da64-465c-aac2-2db805ba942f] Pending
	I0229 02:17:59.661305  360079 system_pods.go:89] "coredns-76f75df574-s7wgl" [edf9b2e4-e3ae-44b4-9561-722b74c19032] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0229 02:17:59.661322  360079 system_pods.go:89] "etcd-no-preload-907398" [90447e7e-0f30-4a5c-87d2-2ea1afc92dbe] Running
	I0229 02:17:59.661342  360079 system_pods.go:89] "kube-apiserver-no-preload-907398" [14240b57-03a5-4746-a246-28a338d7dbc1] Running
	I0229 02:17:59.661358  360079 system_pods.go:89] "kube-controller-manager-no-preload-907398" [45c6c2c5-ac84-4fc2-8981-e2f2ae6d3212] Running
	I0229 02:17:59.661376  360079 system_pods.go:89] "kube-proxy-r95w4" [32ff71b3-9287-4afa-9743-0e5e9068fa6d] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0229 02:17:59.661392  360079 system_pods.go:89] "kube-scheduler-no-preload-907398" [db7fbfcd-11c3-4ffb-a4c0-5e57ec7e069f] Running
	I0229 02:17:59.661424  360079 retry.go:31] will retry after 225.195811ms: missing components: kube-dns, kube-proxy
	I0229 02:17:59.900439  360079 system_pods.go:86] 7 kube-system pods found
	I0229 02:17:59.900490  360079 system_pods.go:89] "coredns-76f75df574-lh6fx" [092a35bd-da64-465c-aac2-2db805ba942f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0229 02:17:59.900539  360079 system_pods.go:89] "coredns-76f75df574-s7wgl" [edf9b2e4-e3ae-44b4-9561-722b74c19032] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0229 02:17:59.900555  360079 system_pods.go:89] "etcd-no-preload-907398" [90447e7e-0f30-4a5c-87d2-2ea1afc92dbe] Running
	I0229 02:17:59.900563  360079 system_pods.go:89] "kube-apiserver-no-preload-907398" [14240b57-03a5-4746-a246-28a338d7dbc1] Running
	I0229 02:17:59.900576  360079 system_pods.go:89] "kube-controller-manager-no-preload-907398" [45c6c2c5-ac84-4fc2-8981-e2f2ae6d3212] Running
	I0229 02:17:59.900587  360079 system_pods.go:89] "kube-proxy-r95w4" [32ff71b3-9287-4afa-9743-0e5e9068fa6d] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0229 02:17:59.900597  360079 system_pods.go:89] "kube-scheduler-no-preload-907398" [db7fbfcd-11c3-4ffb-a4c0-5e57ec7e069f] Running
	I0229 02:17:59.900620  360079 retry.go:31] will retry after 348.416029ms: missing components: kube-dns, kube-proxy
	I0229 02:18:00.221814  360079 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.509290892s)
	I0229 02:18:00.221894  360079 main.go:141] libmachine: Making call to close driver server
	I0229 02:18:00.221910  360079 main.go:141] libmachine: (no-preload-907398) Calling .Close
	I0229 02:18:00.221939  360079 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.515133599s)
	I0229 02:18:00.221984  360079 main.go:141] libmachine: Making call to close driver server
	I0229 02:18:00.221998  360079 main.go:141] libmachine: (no-preload-907398) Calling .Close
	I0229 02:18:00.222483  360079 main.go:141] libmachine: (no-preload-907398) DBG | Closing plugin on server side
	I0229 02:18:00.222513  360079 main.go:141] libmachine: (no-preload-907398) DBG | Closing plugin on server side
	I0229 02:18:00.222695  360079 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:18:00.222753  360079 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:18:00.222784  360079 main.go:141] libmachine: Making call to close driver server
	I0229 02:18:00.222801  360079 main.go:141] libmachine: (no-preload-907398) Calling .Close
	I0229 02:18:00.223074  360079 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:18:00.223113  360079 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:18:00.224083  360079 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:18:00.224104  360079 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:18:00.224115  360079 main.go:141] libmachine: Making call to close driver server
	I0229 02:18:00.224123  360079 main.go:141] libmachine: (no-preload-907398) Calling .Close
	I0229 02:18:00.224355  360079 main.go:141] libmachine: (no-preload-907398) DBG | Closing plugin on server side
	I0229 02:18:00.224402  360079 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:18:00.224415  360079 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:18:00.254073  360079 main.go:141] libmachine: Making call to close driver server
	I0229 02:18:00.254130  360079 main.go:141] libmachine: (no-preload-907398) Calling .Close
	I0229 02:18:00.256526  360079 main.go:141] libmachine: (no-preload-907398) DBG | Closing plugin on server side
	I0229 02:18:00.256546  360079 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:18:00.256576  360079 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:18:00.281620  360079 system_pods.go:86] 8 kube-system pods found
	I0229 02:18:00.281652  360079 system_pods.go:89] "coredns-76f75df574-lh6fx" [092a35bd-da64-465c-aac2-2db805ba942f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0229 02:18:00.281658  360079 system_pods.go:89] "coredns-76f75df574-s7wgl" [edf9b2e4-e3ae-44b4-9561-722b74c19032] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0229 02:18:00.281664  360079 system_pods.go:89] "etcd-no-preload-907398" [90447e7e-0f30-4a5c-87d2-2ea1afc92dbe] Running
	I0229 02:18:00.281671  360079 system_pods.go:89] "kube-apiserver-no-preload-907398" [14240b57-03a5-4746-a246-28a338d7dbc1] Running
	I0229 02:18:00.281676  360079 system_pods.go:89] "kube-controller-manager-no-preload-907398" [45c6c2c5-ac84-4fc2-8981-e2f2ae6d3212] Running
	I0229 02:18:00.281681  360079 system_pods.go:89] "kube-proxy-r95w4" [32ff71b3-9287-4afa-9743-0e5e9068fa6d] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0229 02:18:00.281685  360079 system_pods.go:89] "kube-scheduler-no-preload-907398" [db7fbfcd-11c3-4ffb-a4c0-5e57ec7e069f] Running
	I0229 02:18:00.281695  360079 system_pods.go:89] "storage-provisioner" [e1001a81-038a-4eef-8b9f-8659058fa9c2] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0229 02:18:00.281717  360079 retry.go:31] will retry after 374.602979ms: missing components: kube-dns, kube-proxy
	I0229 02:18:00.701978  360079 system_pods.go:86] 8 kube-system pods found
	I0229 02:18:00.702028  360079 system_pods.go:89] "coredns-76f75df574-lh6fx" [092a35bd-da64-465c-aac2-2db805ba942f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0229 02:18:00.702039  360079 system_pods.go:89] "coredns-76f75df574-s7wgl" [edf9b2e4-e3ae-44b4-9561-722b74c19032] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0229 02:18:00.702048  360079 system_pods.go:89] "etcd-no-preload-907398" [90447e7e-0f30-4a5c-87d2-2ea1afc92dbe] Running
	I0229 02:18:00.702059  360079 system_pods.go:89] "kube-apiserver-no-preload-907398" [14240b57-03a5-4746-a246-28a338d7dbc1] Running
	I0229 02:18:00.702066  360079 system_pods.go:89] "kube-controller-manager-no-preload-907398" [45c6c2c5-ac84-4fc2-8981-e2f2ae6d3212] Running
	I0229 02:18:00.702075  360079 system_pods.go:89] "kube-proxy-r95w4" [32ff71b3-9287-4afa-9743-0e5e9068fa6d] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0229 02:18:00.702094  360079 system_pods.go:89] "kube-scheduler-no-preload-907398" [db7fbfcd-11c3-4ffb-a4c0-5e57ec7e069f] Running
	I0229 02:18:00.702107  360079 system_pods.go:89] "storage-provisioner" [e1001a81-038a-4eef-8b9f-8659058fa9c2] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0229 02:18:00.702131  360079 retry.go:31] will retry after 563.29938ms: missing components: kube-dns, kube-proxy
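	(The "retry.go:31] will retry after ..." lines show growing, slightly randomized delays between checks rather than a fixed interval. A generic sketch of that shape — the doubling base, cap, and 25% jitter are assumptions, not minikube's exact policy:

	    package retry

	    import (
	        "math/rand"
	        "time"
	    )

	    // Backoff returns an exponentially growing delay with jitter,
	    // capped at max, so concurrent waiters don't synchronize.
	    func Backoff(attempt int, base, max time.Duration) time.Duration {
	        d := base << uint(attempt) // e.g. 200ms, 400ms, 800ms, ...
	        if d > max {
	            d = max
	        }
	        return d + time.Duration(rand.Int63n(int64(d)/4+1))
	    }

	With a ~200ms base this produces roughly the growing 225ms/348ms/374ms/563ms cadence seen in the retries above.)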
	I0229 02:18:01.275888  360079 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.031696303s)
	I0229 02:18:01.275958  360079 main.go:141] libmachine: Making call to close driver server
	I0229 02:18:01.275973  360079 main.go:141] libmachine: (no-preload-907398) Calling .Close
	I0229 02:18:01.276375  360079 main.go:141] libmachine: (no-preload-907398) DBG | Closing plugin on server side
	I0229 02:18:01.276422  360079 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:18:01.276435  360079 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:18:01.276448  360079 main.go:141] libmachine: Making call to close driver server
	I0229 02:18:01.276473  360079 main.go:141] libmachine: (no-preload-907398) Calling .Close
	I0229 02:18:01.276898  360079 main.go:141] libmachine: (no-preload-907398) DBG | Closing plugin on server side
	I0229 02:18:01.276957  360079 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:18:01.277012  360079 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:18:01.277032  360079 addons.go:470] Verifying addon metrics-server=true in "no-preload-907398"
	I0229 02:18:01.286612  360079 system_pods.go:86] 9 kube-system pods found
	I0229 02:18:01.286655  360079 system_pods.go:89] "coredns-76f75df574-lh6fx" [092a35bd-da64-465c-aac2-2db805ba942f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0229 02:18:01.286668  360079 system_pods.go:89] "coredns-76f75df574-s7wgl" [edf9b2e4-e3ae-44b4-9561-722b74c19032] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0229 02:18:01.286676  360079 system_pods.go:89] "etcd-no-preload-907398" [90447e7e-0f30-4a5c-87d2-2ea1afc92dbe] Running
	I0229 02:18:01.286686  360079 system_pods.go:89] "kube-apiserver-no-preload-907398" [14240b57-03a5-4746-a246-28a338d7dbc1] Running
	I0229 02:18:01.286697  360079 system_pods.go:89] "kube-controller-manager-no-preload-907398" [45c6c2c5-ac84-4fc2-8981-e2f2ae6d3212] Running
	I0229 02:18:01.286706  360079 system_pods.go:89] "kube-proxy-r95w4" [32ff71b3-9287-4afa-9743-0e5e9068fa6d] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0229 02:18:01.286716  360079 system_pods.go:89] "kube-scheduler-no-preload-907398" [db7fbfcd-11c3-4ffb-a4c0-5e57ec7e069f] Running
	I0229 02:18:01.286726  360079 system_pods.go:89] "metrics-server-57f55c9bc5-hln75" [8bfb6800-10c6-4154-8311-e568c1e146d1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0229 02:18:01.286745  360079 system_pods.go:89] "storage-provisioner" [e1001a81-038a-4eef-8b9f-8659058fa9c2] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0229 02:18:01.286772  360079 retry.go:31] will retry after 523.32187ms: missing components: kube-dns, kube-proxy
	I0229 02:18:01.829847  360079 system_pods.go:86] 9 kube-system pods found
	I0229 02:18:01.829894  360079 system_pods.go:89] "coredns-76f75df574-lh6fx" [092a35bd-da64-465c-aac2-2db805ba942f] Running
	I0229 02:18:01.829905  360079 system_pods.go:89] "coredns-76f75df574-s7wgl" [edf9b2e4-e3ae-44b4-9561-722b74c19032] Running
	I0229 02:18:01.829912  360079 system_pods.go:89] "etcd-no-preload-907398" [90447e7e-0f30-4a5c-87d2-2ea1afc92dbe] Running
	I0229 02:18:01.829924  360079 system_pods.go:89] "kube-apiserver-no-preload-907398" [14240b57-03a5-4746-a246-28a338d7dbc1] Running
	I0229 02:18:01.829932  360079 system_pods.go:89] "kube-controller-manager-no-preload-907398" [45c6c2c5-ac84-4fc2-8981-e2f2ae6d3212] Running
	I0229 02:18:01.829938  360079 system_pods.go:89] "kube-proxy-r95w4" [32ff71b3-9287-4afa-9743-0e5e9068fa6d] Running
	I0229 02:18:01.829944  360079 system_pods.go:89] "kube-scheduler-no-preload-907398" [db7fbfcd-11c3-4ffb-a4c0-5e57ec7e069f] Running
	I0229 02:18:01.829957  360079 system_pods.go:89] "metrics-server-57f55c9bc5-hln75" [8bfb6800-10c6-4154-8311-e568c1e146d1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0229 02:18:01.829967  360079 system_pods.go:89] "storage-provisioner" [e1001a81-038a-4eef-8b9f-8659058fa9c2] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0229 02:18:01.829989  360079 system_pods.go:126] duration metric: took 2.243096892s to wait for k8s-apps to be running ...
	I0229 02:18:01.830005  360079 system_svc.go:44] waiting for kubelet service to be running ...
	I0229 02:18:01.830091  360079 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 02:18:02.189987  360079 system_svc.go:56] duration metric: took 359.972364ms WaitForService to wait for kubelet.
	I0229 02:18:02.190024  360079 kubeadm.go:581] duration metric: took 3.786642999s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0229 02:18:02.190050  360079 node_conditions.go:102] verifying NodePressure condition ...
	I0229 02:18:02.190227  360079 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.576785344s)
	I0229 02:18:02.190281  360079 main.go:141] libmachine: Making call to close driver server
	I0229 02:18:02.190299  360079 main.go:141] libmachine: (no-preload-907398) Calling .Close
	I0229 02:18:02.190727  360079 main.go:141] libmachine: (no-preload-907398) DBG | Closing plugin on server side
	I0229 02:18:02.190798  360079 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:18:02.190810  360079 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:18:02.190819  360079 main.go:141] libmachine: Making call to close driver server
	I0229 02:18:02.190827  360079 main.go:141] libmachine: (no-preload-907398) Calling .Close
	I0229 02:18:02.193012  360079 main.go:141] libmachine: (no-preload-907398) DBG | Closing plugin on server side
	I0229 02:18:02.193025  360079 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:18:02.193062  360079 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:18:02.194791  360079 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-907398 addons enable metrics-server
	
	I0229 02:18:02.196317  360079 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0229 02:18:02.197863  360079 addons.go:505] enable addons completed in 3.84551804s: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
	I0229 02:18:02.210831  360079 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0229 02:18:02.210859  360079 node_conditions.go:123] node cpu capacity is 2
	I0229 02:18:02.210871  360079 node_conditions.go:105] duration metric: took 20.81411ms to run NodePressure ...
	I0229 02:18:02.210885  360079 start.go:228] waiting for startup goroutines ...
	I0229 02:18:02.210894  360079 start.go:233] waiting for cluster config update ...
	I0229 02:18:02.210911  360079 start.go:242] writing updated cluster config ...
	I0229 02:18:02.211195  360079 ssh_runner.go:195] Run: rm -f paused
	I0229 02:18:02.271875  360079 start.go:601] kubectl: 1.29.2, cluster: 1.29.0-rc.2 (minor skew: 0)
	I0229 02:18:02.273687  360079 out.go:177] * Done! kubectl is now configured to use "no-preload-907398" cluster and "default" namespace by default
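	(The closing "kubectl: 1.29.2, cluster: 1.29.0-rc.2 (minor skew: 0)" line compares only the minor components of the two versions; the pre-release tag "-rc.2" does not affect the skew. A small sketch of that comparison — the parsing here is deliberately simplified, where real code would use a semver parser:

	    package skew

	    import (
	        "strconv"
	        "strings"
	    )

	    // Minor extracts the minor number from "1.29.2" or "1.29.0-rc.2".
	    func Minor(v string) int {
	        parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
	        if len(parts) < 2 {
	            return 0
	        }
	        n, _ := strconv.Atoi(parts[1])
	        return n
	    }

	    // Skew is the absolute minor-version difference; 0 in the log above.
	    func Skew(kubectl, cluster string) int {
	        d := Minor(kubectl) - Minor(cluster)
	        if d < 0 {
	            d = -d
	        }
	        return d
	    })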
	I0229 02:17:59.194448  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:17:59.212378  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:17:59.212455  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:17:59.272835  360776 cri.go:89] found id: ""
	I0229 02:17:59.272864  360776 logs.go:276] 0 containers: []
	W0229 02:17:59.272873  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:17:59.272879  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:17:59.272945  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:17:59.326044  360776 cri.go:89] found id: ""
	I0229 02:17:59.326097  360776 logs.go:276] 0 containers: []
	W0229 02:17:59.326110  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:17:59.326119  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:17:59.326195  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:17:59.375112  360776 cri.go:89] found id: ""
	I0229 02:17:59.375147  360776 logs.go:276] 0 containers: []
	W0229 02:17:59.375158  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:17:59.375165  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:17:59.375231  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:17:59.423465  360776 cri.go:89] found id: ""
	I0229 02:17:59.423489  360776 logs.go:276] 0 containers: []
	W0229 02:17:59.423498  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:17:59.423504  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:17:59.423564  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:17:59.464386  360776 cri.go:89] found id: ""
	I0229 02:17:59.464416  360776 logs.go:276] 0 containers: []
	W0229 02:17:59.464427  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:17:59.464433  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:17:59.464493  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:17:59.507714  360776 cri.go:89] found id: ""
	I0229 02:17:59.507746  360776 logs.go:276] 0 containers: []
	W0229 02:17:59.507759  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:17:59.507768  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:17:59.507836  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:17:59.563729  360776 cri.go:89] found id: ""
	I0229 02:17:59.563761  360776 logs.go:276] 0 containers: []
	W0229 02:17:59.563773  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:17:59.563781  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:17:59.563869  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:17:59.623366  360776 cri.go:89] found id: ""
	I0229 02:17:59.623392  360776 logs.go:276] 0 containers: []
	W0229 02:17:59.623404  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
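	(The cri.go blocks above issue one "crictl ps -a --quiet --name=<component>" per control-plane component; --quiet prints only container IDs, one per line, so empty output is what produces the found id: "" / "0 containers" pairs. A sketch of that query, exec-based with an illustrative name:

	    package cri

	    import (
	        "os/exec"
	        "strings"
	    )

	    // ListIDs returns the container IDs crictl reports for a name
	    // filter; an empty slice corresponds to the "0 containers" lines.
	    func ListIDs(name string) ([]string, error) {
	        out, err := exec.Command("sudo", "crictl", "ps", "-a",
	            "--quiet", "--name="+name).Output()
	        if err != nil {
	            return nil, err
	        }
	        return strings.Fields(string(out)), nil
	    }

	Every component coming back empty is why the code then falls through to gathering kubelet, dmesg, and containerd logs below.)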
	I0229 02:17:59.623417  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:17:59.623432  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:17:59.700723  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:17:59.700783  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:17:59.722858  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:17:59.722904  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:17:59.830864  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:17:59.830892  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:17:59.830908  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:17:59.881944  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:17:59.881996  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:18:00.814212  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:18:03.310396  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:18:05.240170  360217 kubeadm.go:322] [apiclient] All control plane components are healthy after 6.506059 seconds
	I0229 02:18:05.240365  360217 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0229 02:18:05.258467  360217 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0229 02:18:05.790274  360217 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0229 02:18:05.790547  360217 kubeadm.go:322] [mark-control-plane] Marking the node default-k8s-diff-port-254367 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0229 02:18:06.306317  360217 kubeadm.go:322] [bootstrap-token] Using token: up9wo1.za7nj6xpc5l7gy5b
	I0229 02:18:06.308235  360217 out.go:204]   - Configuring RBAC rules ...
	I0229 02:18:06.308376  360217 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0229 02:18:06.317348  360217 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0229 02:18:06.328386  360217 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0229 02:18:06.333738  360217 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0229 02:18:06.338257  360217 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0229 02:18:06.342124  360217 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0229 02:18:06.357763  360217 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0229 02:18:06.667301  360217 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0229 02:18:06.893898  360217 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0229 02:18:06.900021  360217 kubeadm.go:322] 
	I0229 02:18:06.900123  360217 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0229 02:18:06.900136  360217 kubeadm.go:322] 
	I0229 02:18:06.900244  360217 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0229 02:18:06.900251  360217 kubeadm.go:322] 
	I0229 02:18:06.900282  360217 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0229 02:18:06.900361  360217 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0229 02:18:06.900422  360217 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0229 02:18:06.900428  360217 kubeadm.go:322] 
	I0229 02:18:06.900491  360217 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0229 02:18:06.900505  360217 kubeadm.go:322] 
	I0229 02:18:06.900564  360217 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0229 02:18:06.900570  360217 kubeadm.go:322] 
	I0229 02:18:06.900633  360217 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0229 02:18:06.900725  360217 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0229 02:18:06.900814  360217 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0229 02:18:06.900832  360217 kubeadm.go:322] 
	I0229 02:18:06.900935  360217 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0229 02:18:06.901029  360217 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0229 02:18:06.901038  360217 kubeadm.go:322] 
	I0229 02:18:06.901139  360217 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8444 --token up9wo1.za7nj6xpc5l7gy5b \
	I0229 02:18:06.901267  360217 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:99ebea2fb56f25d35c82503561681d8a6f4727046bb097effad0d0203de43e75 \
	I0229 02:18:06.901296  360217 kubeadm.go:322] 	--control-plane 
	I0229 02:18:06.901302  360217 kubeadm.go:322] 
	I0229 02:18:06.901439  360217 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0229 02:18:06.901447  360217 kubeadm.go:322] 
	I0229 02:18:06.901554  360217 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8444 --token up9wo1.za7nj6xpc5l7gy5b \
	I0229 02:18:06.901681  360217 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:99ebea2fb56f25d35c82503561681d8a6f4727046bb097effad0d0203de43e75 
	I0229 02:18:06.904775  360217 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
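	(The --discovery-token-ca-cert-hash value in the join commands above is, per the kubeadm documentation, the SHA-256 digest of the CA certificate's DER-encoded Subject Public Key Info. It can be recomputed from the node's CA file like this:

	    package cahash

	    import (
	        "crypto/sha256"
	        "crypto/x509"
	        "encoding/pem"
	        "fmt"
	        "os"
	    )

	    // SPKIHash returns the kubeadm-style discovery hash for a PEM CA
	    // cert, e.g. /etc/kubernetes/pki/ca.crt on the control plane.
	    func SPKIHash(path string) (string, error) {
	        data, err := os.ReadFile(path)
	        if err != nil {
	            return "", err
	        }
	        block, _ := pem.Decode(data)
	        if block == nil {
	            return "", fmt.Errorf("no PEM block in %s", path)
	        }
	        cert, err := x509.ParseCertificate(block.Bytes)
	        if err != nil {
	            return "", err
	        }
	        sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	        return fmt.Sprintf("sha256:%x", sum), nil
	    })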
	I0229 02:18:06.904839  360217 cni.go:84] Creating CNI manager for ""
	I0229 02:18:06.904862  360217 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0229 02:18:06.906658  360217 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0229 02:18:02.462408  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:18:02.485957  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:18:02.486017  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:18:02.540769  360776 cri.go:89] found id: ""
	I0229 02:18:02.540803  360776 logs.go:276] 0 containers: []
	W0229 02:18:02.540814  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:18:02.540834  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:18:02.540902  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:18:02.584488  360776 cri.go:89] found id: ""
	I0229 02:18:02.584514  360776 logs.go:276] 0 containers: []
	W0229 02:18:02.584525  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:18:02.584532  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:18:02.584601  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:18:02.644908  360776 cri.go:89] found id: ""
	I0229 02:18:02.644943  360776 logs.go:276] 0 containers: []
	W0229 02:18:02.644956  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:18:02.644963  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:18:02.645031  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:18:02.702464  360776 cri.go:89] found id: ""
	I0229 02:18:02.702498  360776 logs.go:276] 0 containers: []
	W0229 02:18:02.702510  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:18:02.702519  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:18:02.702587  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:18:02.754980  360776 cri.go:89] found id: ""
	I0229 02:18:02.755008  360776 logs.go:276] 0 containers: []
	W0229 02:18:02.755020  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:18:02.755029  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:18:02.755101  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:18:02.807863  360776 cri.go:89] found id: ""
	I0229 02:18:02.807890  360776 logs.go:276] 0 containers: []
	W0229 02:18:02.807901  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:18:02.807908  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:18:02.807964  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:18:02.850910  360776 cri.go:89] found id: ""
	I0229 02:18:02.850943  360776 logs.go:276] 0 containers: []
	W0229 02:18:02.850956  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:18:02.850964  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:18:02.851034  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:18:02.895792  360776 cri.go:89] found id: ""
	I0229 02:18:02.895832  360776 logs.go:276] 0 containers: []
	W0229 02:18:02.895844  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:18:02.895857  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:18:02.895874  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:18:02.951353  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:18:02.951399  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:18:02.970262  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:18:02.970303  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:18:03.055141  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:18:03.055165  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:18:03.055182  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:18:03.091751  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:18:03.091791  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:18:05.646070  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:18:05.663225  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:18:05.663301  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:18:05.712565  360776 cri.go:89] found id: ""
	I0229 02:18:05.712604  360776 logs.go:276] 0 containers: []
	W0229 02:18:05.712623  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:18:05.712632  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:18:05.712697  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:18:05.761656  360776 cri.go:89] found id: ""
	I0229 02:18:05.761685  360776 logs.go:276] 0 containers: []
	W0229 02:18:05.761699  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:18:05.761715  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:18:05.761780  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:18:05.805264  360776 cri.go:89] found id: ""
	I0229 02:18:05.805299  360776 logs.go:276] 0 containers: []
	W0229 02:18:05.805310  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:18:05.805318  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:18:05.805382  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:18:05.853483  360776 cri.go:89] found id: ""
	I0229 02:18:05.853555  360776 logs.go:276] 0 containers: []
	W0229 02:18:05.853569  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:18:05.853578  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:18:05.853653  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:18:05.894561  360776 cri.go:89] found id: ""
	I0229 02:18:05.894589  360776 logs.go:276] 0 containers: []
	W0229 02:18:05.894608  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:18:05.894616  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:18:05.894680  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:18:05.937784  360776 cri.go:89] found id: ""
	I0229 02:18:05.937816  360776 logs.go:276] 0 containers: []
	W0229 02:18:05.937825  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:18:05.937832  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:18:05.937900  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:18:05.982000  360776 cri.go:89] found id: ""
	I0229 02:18:05.982028  360776 logs.go:276] 0 containers: []
	W0229 02:18:05.982039  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:18:05.982046  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:18:05.982136  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:18:06.025395  360776 cri.go:89] found id: ""
	I0229 02:18:06.025430  360776 logs.go:276] 0 containers: []
	W0229 02:18:06.025443  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:18:06.025455  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:18:06.025470  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:18:06.078175  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:18:06.078221  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:18:06.106042  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:18:06.106097  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:18:06.233485  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:18:06.233506  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:18:06.233522  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:18:06.273517  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:18:06.273557  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
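	(The block above is one full pass of minikube's log gatherer: it probes for each expected control-plane container by name via crictl, finds none, then collects kubelet, dmesg, describe-nodes, containerd, and container-status output. A minimal standalone version of the container probe, assembled from the commands shown in the log; a sketch to run inside the guest VM, not minikube's exact code path:

	# Fall back to docker if crictl is absent, exactly as the gatherer's command does
	sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a
	# Empty output here means no kube-apiserver container was ever created
	sudo crictl ps -a --quiet --name=kube-apiserver
	)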
	I0229 02:18:06.908321  360217 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0229 02:18:06.928907  360217 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0229 02:18:06.976992  360217 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0229 02:18:06.977068  360217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:18:06.977074  360217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=f83faa3abac33ca85ff15afa19006ad0a2554d61 minikube.k8s.io/name=default-k8s-diff-port-254367 minikube.k8s.io/updated_at=2024_02_29T02_18_06_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:18:07.053045  360217 ops.go:34] apiserver oom_adj: -16
	I0229 02:18:07.339410  360217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:18:07.840356  360217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:18:08.340151  360217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:18:08.840168  360217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:18:05.809727  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:18:08.311572  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:18:08.827599  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:18:08.845166  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:18:08.845270  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:18:08.891258  360776 cri.go:89] found id: ""
	I0229 02:18:08.891291  360776 logs.go:276] 0 containers: []
	W0229 02:18:08.891303  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:18:08.891311  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:18:08.891381  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:18:08.936833  360776 cri.go:89] found id: ""
	I0229 02:18:08.936868  360776 logs.go:276] 0 containers: []
	W0229 02:18:08.936879  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:18:08.936888  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:18:08.936962  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:18:08.979759  360776 cri.go:89] found id: ""
	I0229 02:18:08.979788  360776 logs.go:276] 0 containers: []
	W0229 02:18:08.979800  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:18:08.979812  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:18:08.979878  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:18:09.023686  360776 cri.go:89] found id: ""
	I0229 02:18:09.023722  360776 logs.go:276] 0 containers: []
	W0229 02:18:09.023734  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:18:09.023744  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:18:09.023817  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:18:09.068374  360776 cri.go:89] found id: ""
	I0229 02:18:09.068413  360776 logs.go:276] 0 containers: []
	W0229 02:18:09.068426  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:18:09.068434  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:18:09.068502  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:18:09.147948  360776 cri.go:89] found id: ""
	I0229 02:18:09.147976  360776 logs.go:276] 0 containers: []
	W0229 02:18:09.147985  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:18:09.147991  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:18:09.148043  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:18:09.202491  360776 cri.go:89] found id: ""
	I0229 02:18:09.202522  360776 logs.go:276] 0 containers: []
	W0229 02:18:09.202534  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:18:09.202542  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:18:09.202605  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:18:09.248957  360776 cri.go:89] found id: ""
	I0229 02:18:09.248992  360776 logs.go:276] 0 containers: []
	W0229 02:18:09.249005  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:18:09.249018  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:18:09.249038  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:18:09.318433  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:18:09.318476  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:18:09.335205  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:18:09.335240  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:18:09.417917  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:18:09.417952  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:18:09.417969  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:18:09.464739  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:18:09.464779  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:18:12.017825  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:18:12.033452  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:18:12.033518  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:18:12.082587  360776 cri.go:89] found id: ""
	I0229 02:18:12.082621  360776 logs.go:276] 0 containers: []
	W0229 02:18:12.082634  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:18:12.082642  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:18:12.082714  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:18:12.132662  360776 cri.go:89] found id: ""
	I0229 02:18:12.132696  360776 logs.go:276] 0 containers: []
	W0229 02:18:12.132717  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:18:12.132725  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:18:12.132795  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:18:12.204316  360776 cri.go:89] found id: ""
	I0229 02:18:12.204343  360776 logs.go:276] 0 containers: []
	W0229 02:18:12.204351  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:18:12.204357  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:18:12.204417  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:18:12.255146  360776 cri.go:89] found id: ""
	I0229 02:18:12.255178  360776 logs.go:276] 0 containers: []
	W0229 02:18:12.255190  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:18:12.255198  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:18:12.255265  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:18:12.299280  360776 cri.go:89] found id: ""
	I0229 02:18:12.299314  360776 logs.go:276] 0 containers: []
	W0229 02:18:12.299328  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:18:12.299337  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:18:12.299410  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:18:12.340621  360776 cri.go:89] found id: ""
	I0229 02:18:12.340646  360776 logs.go:276] 0 containers: []
	W0229 02:18:12.340658  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:18:12.340667  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:18:12.340722  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:18:09.339996  360217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:18:09.839471  360217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:18:10.340401  360217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:18:10.839457  360217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:18:11.340046  360217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:18:11.839746  360217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:18:12.339889  360217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:18:12.839469  360217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:18:13.339676  360217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:18:13.840012  360217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:18:10.809010  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:18:13.307420  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:18:12.391888  360776 cri.go:89] found id: ""
	I0229 02:18:12.391926  360776 logs.go:276] 0 containers: []
	W0229 02:18:12.391938  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:18:12.391945  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:18:12.392010  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:18:12.440219  360776 cri.go:89] found id: ""
	I0229 02:18:12.440250  360776 logs.go:276] 0 containers: []
	W0229 02:18:12.440263  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:18:12.440276  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:18:12.440290  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:18:12.495586  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:18:12.495621  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:18:12.513608  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:18:12.513653  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:18:12.587894  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:18:12.587929  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:18:12.587956  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:18:12.625496  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:18:12.625533  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:18:15.187090  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:18:15.206990  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:18:15.207074  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:18:15.261493  360776 cri.go:89] found id: ""
	I0229 02:18:15.261522  360776 logs.go:276] 0 containers: []
	W0229 02:18:15.261535  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:18:15.261543  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:18:15.261620  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:18:15.302408  360776 cri.go:89] found id: ""
	I0229 02:18:15.302437  360776 logs.go:276] 0 containers: []
	W0229 02:18:15.302449  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:18:15.302457  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:18:15.302524  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:18:15.340553  360776 cri.go:89] found id: ""
	I0229 02:18:15.340580  360776 logs.go:276] 0 containers: []
	W0229 02:18:15.340590  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:18:15.340598  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:18:15.340661  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:18:15.383659  360776 cri.go:89] found id: ""
	I0229 02:18:15.383688  360776 logs.go:276] 0 containers: []
	W0229 02:18:15.383699  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:18:15.383708  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:18:15.383777  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:18:15.433164  360776 cri.go:89] found id: ""
	I0229 02:18:15.433200  360776 logs.go:276] 0 containers: []
	W0229 02:18:15.433212  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:18:15.433220  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:18:15.433293  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:18:15.479950  360776 cri.go:89] found id: ""
	I0229 02:18:15.479993  360776 logs.go:276] 0 containers: []
	W0229 02:18:15.480006  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:18:15.480014  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:18:15.480078  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:18:15.519601  360776 cri.go:89] found id: ""
	I0229 02:18:15.519628  360776 logs.go:276] 0 containers: []
	W0229 02:18:15.519637  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:18:15.519644  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:18:15.519707  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:18:15.564564  360776 cri.go:89] found id: ""
	I0229 02:18:15.564598  360776 logs.go:276] 0 containers: []
	W0229 02:18:15.564610  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:18:15.564624  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:18:15.564643  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:18:15.615855  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:18:15.615894  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:18:15.632464  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:18:15.632505  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:18:15.713177  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:18:15.713198  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:18:15.713214  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:18:15.749296  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:18:15.749326  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:18:14.340255  360217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:18:14.839541  360217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:18:15.339620  360217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:18:15.840469  360217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:18:16.339540  360217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:18:16.840203  360217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:18:17.339841  360217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:18:17.839673  360217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:18:18.339956  360217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:18:18.839965  360217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:18:19.023067  360217 kubeadm.go:1088] duration metric: took 12.046075339s to wait for elevateKubeSystemPrivileges.
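	(The run of "kubectl get sa default" calls above is a readiness poll: minikube retries until the default ServiceAccount exists before declaring elevateKubeSystemPrivileges done, the step that created the minikube-rbac clusterrolebinding at 02:18:06.977074. An equivalent poll, hedged as a sketch using the same binary and kubeconfig paths the log shows:

	until sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default \
	      --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5   # the log timestamps show roughly 500ms between attempts
	done
	)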
	I0229 02:18:19.023110  360217 kubeadm.go:406] StartCluster complete in 4m58.952060994s
	I0229 02:18:19.023136  360217 settings.go:142] acquiring lock: {Name:mkf6d985c87ae1ba2300543c86d438bf48134dc6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:18:19.023240  360217 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18063-309085/kubeconfig
	I0229 02:18:19.025049  360217 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18063-309085/kubeconfig: {Name:mkd85f0f36cbc770f723a754929a01738907b7bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:18:19.027123  360217 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0229 02:18:19.027409  360217 config.go:182] Loaded profile config "default-k8s-diff-port-254367": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0229 02:18:19.027464  360217 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0229 02:18:19.027538  360217 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-254367"
	I0229 02:18:19.027561  360217 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-254367"
	W0229 02:18:19.027576  360217 addons.go:243] addon storage-provisioner should already be in state true
	I0229 02:18:19.027588  360217 addons.go:69] Setting dashboard=true in profile "default-k8s-diff-port-254367"
	I0229 02:18:19.027620  360217 addons.go:234] Setting addon dashboard=true in "default-k8s-diff-port-254367"
	I0229 02:18:19.027628  360217 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-254367"
	W0229 02:18:19.027633  360217 addons.go:243] addon dashboard should already be in state true
	I0229 02:18:19.027642  360217 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-254367"
	I0229 02:18:19.027681  360217 host.go:66] Checking if "default-k8s-diff-port-254367" exists ...
	I0229 02:18:19.028079  360217 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0229 02:18:19.028088  360217 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0229 02:18:19.028108  360217 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:18:19.028114  360217 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:18:19.027623  360217 host.go:66] Checking if "default-k8s-diff-port-254367" exists ...
	I0229 02:18:19.028343  360217 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-254367"
	I0229 02:18:19.028368  360217 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-254367"
	W0229 02:18:19.028377  360217 addons.go:243] addon metrics-server should already be in state true
	I0229 02:18:19.028499  360217 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0229 02:18:19.028537  360217 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:18:19.028563  360217 host.go:66] Checking if "default-k8s-diff-port-254367" exists ...
	I0229 02:18:19.028931  360217 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0229 02:18:19.028959  360217 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:18:19.047714  360217 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39539
	I0229 02:18:19.048288  360217 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:18:19.048404  360217 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38743
	I0229 02:18:19.048502  360217 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33401
	I0229 02:18:19.048785  360217 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:18:19.048915  360217 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:18:19.049087  360217 main.go:141] libmachine: Using API Version  1
	I0229 02:18:19.049106  360217 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:18:19.049417  360217 main.go:141] libmachine: Using API Version  1
	I0229 02:18:19.049443  360217 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:18:19.049468  360217 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:18:19.049605  360217 main.go:141] libmachine: Using API Version  1
	I0229 02:18:19.049623  360217 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:18:19.049632  360217 main.go:141] libmachine: (default-k8s-diff-port-254367) Calling .GetState
	I0229 02:18:19.049830  360217 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:18:19.049990  360217 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:18:19.050491  360217 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0229 02:18:19.050525  360217 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:18:19.050742  360217 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0229 02:18:19.050780  360217 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:18:19.052986  360217 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38555
	I0229 02:18:19.056042  360217 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-254367"
	W0229 02:18:19.056065  360217 addons.go:243] addon default-storageclass should already be in state true
	I0229 02:18:19.056101  360217 host.go:66] Checking if "default-k8s-diff-port-254367" exists ...
	I0229 02:18:19.056338  360217 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:18:19.056649  360217 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0229 02:18:19.056674  360217 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:18:19.057319  360217 main.go:141] libmachine: Using API Version  1
	I0229 02:18:19.057403  360217 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:18:19.058140  360217 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:18:19.059410  360217 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0229 02:18:19.059437  360217 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:18:19.069542  360217 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35931
	I0229 02:18:19.069932  360217 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:18:19.070411  360217 main.go:141] libmachine: Using API Version  1
	I0229 02:18:19.070438  360217 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:18:19.070747  360217 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:18:19.070987  360217 main.go:141] libmachine: (default-k8s-diff-port-254367) Calling .GetState
	I0229 02:18:19.072429  360217 main.go:141] libmachine: (default-k8s-diff-port-254367) Calling .DriverName
	I0229 02:18:19.074634  360217 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0229 02:18:19.076733  360217 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0229 02:18:19.078676  360217 addons.go:426] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0229 02:18:19.078702  360217 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0229 02:18:19.078723  360217 main.go:141] libmachine: (default-k8s-diff-port-254367) Calling .GetSSHHostname
	I0229 02:18:19.078731  360217 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40495
	I0229 02:18:19.078949  360217 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42347
	I0229 02:18:19.079355  360217 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:18:19.079753  360217 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:18:19.080120  360217 main.go:141] libmachine: Using API Version  1
	I0229 02:18:19.080143  360217 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:18:19.080374  360217 main.go:141] libmachine: Using API Version  1
	I0229 02:18:19.080389  360217 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:18:19.080491  360217 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:18:19.080718  360217 main.go:141] libmachine: (default-k8s-diff-port-254367) Calling .GetState
	I0229 02:18:19.080832  360217 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:18:19.081012  360217 main.go:141] libmachine: (default-k8s-diff-port-254367) Calling .GetState
	I0229 02:18:19.082727  360217 main.go:141] libmachine: (default-k8s-diff-port-254367) Calling .DriverName
	I0229 02:18:19.083018  360217 main.go:141] libmachine: (default-k8s-diff-port-254367) Calling .DriverName
	I0229 02:18:19.084629  360217 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0229 02:18:19.083192  360217 main.go:141] libmachine: (default-k8s-diff-port-254367) DBG | domain default-k8s-diff-port-254367 has defined MAC address 52:54:00:52:a9:36 in network mk-default-k8s-diff-port-254367
	I0229 02:18:19.083785  360217 main.go:141] libmachine: (default-k8s-diff-port-254367) Calling .GetSSHPort
	I0229 02:18:19.086324  360217 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0229 02:18:19.086355  360217 main.go:141] libmachine: (default-k8s-diff-port-254367) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:a9:36", ip: ""} in network mk-default-k8s-diff-port-254367: {Iface:virbr3 ExpiryTime:2024-02-29 03:09:29 +0000 UTC Type:0 Mac:52:54:00:52:a9:36 Iaid: IPaddr:192.168.72.88 Prefix:24 Hostname:default-k8s-diff-port-254367 Clientid:01:52:54:00:52:a9:36}
	I0229 02:18:19.087244  360217 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34581
	I0229 02:18:19.087643  360217 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 02:18:19.088961  360217 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0229 02:18:19.088981  360217 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0229 02:18:19.089000  360217 main.go:141] libmachine: (default-k8s-diff-port-254367) Calling .GetSSHHostname
	I0229 02:18:19.087691  360217 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0229 02:18:19.089061  360217 main.go:141] libmachine: (default-k8s-diff-port-254367) Calling .GetSSHHostname
	I0229 02:18:19.087724  360217 main.go:141] libmachine: (default-k8s-diff-port-254367) DBG | domain default-k8s-diff-port-254367 has defined IP address 192.168.72.88 and MAC address 52:54:00:52:a9:36 in network mk-default-k8s-diff-port-254367
	I0229 02:18:19.087806  360217 main.go:141] libmachine: (default-k8s-diff-port-254367) Calling .GetSSHKeyPath
	I0229 02:18:19.087943  360217 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:18:19.089282  360217 main.go:141] libmachine: (default-k8s-diff-port-254367) Calling .GetSSHUsername
	I0229 02:18:19.089425  360217 sshutil.go:53] new ssh client: &{IP:192.168.72.88 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-309085/.minikube/machines/default-k8s-diff-port-254367/id_rsa Username:docker}
	I0229 02:18:19.090396  360217 main.go:141] libmachine: Using API Version  1
	I0229 02:18:19.090419  360217 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:18:19.090890  360217 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:18:19.091717  360217 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0229 02:18:19.091743  360217 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:18:19.092187  360217 main.go:141] libmachine: (default-k8s-diff-port-254367) DBG | domain default-k8s-diff-port-254367 has defined MAC address 52:54:00:52:a9:36 in network mk-default-k8s-diff-port-254367
	I0229 02:18:19.092654  360217 main.go:141] libmachine: (default-k8s-diff-port-254367) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:a9:36", ip: ""} in network mk-default-k8s-diff-port-254367: {Iface:virbr3 ExpiryTime:2024-02-29 03:09:29 +0000 UTC Type:0 Mac:52:54:00:52:a9:36 Iaid: IPaddr:192.168.72.88 Prefix:24 Hostname:default-k8s-diff-port-254367 Clientid:01:52:54:00:52:a9:36}
	I0229 02:18:19.092677  360217 main.go:141] libmachine: (default-k8s-diff-port-254367) DBG | domain default-k8s-diff-port-254367 has defined IP address 192.168.72.88 and MAC address 52:54:00:52:a9:36 in network mk-default-k8s-diff-port-254367
	I0229 02:18:19.092801  360217 main.go:141] libmachine: (default-k8s-diff-port-254367) Calling .GetSSHPort
	I0229 02:18:19.093024  360217 main.go:141] libmachine: (default-k8s-diff-port-254367) Calling .GetSSHKeyPath
	I0229 02:18:19.093212  360217 main.go:141] libmachine: (default-k8s-diff-port-254367) DBG | domain default-k8s-diff-port-254367 has defined MAC address 52:54:00:52:a9:36 in network mk-default-k8s-diff-port-254367
	I0229 02:18:19.093402  360217 main.go:141] libmachine: (default-k8s-diff-port-254367) Calling .GetSSHUsername
	I0229 02:18:19.093539  360217 sshutil.go:53] new ssh client: &{IP:192.168.72.88 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-309085/.minikube/machines/default-k8s-diff-port-254367/id_rsa Username:docker}
	I0229 02:18:19.093806  360217 main.go:141] libmachine: (default-k8s-diff-port-254367) Calling .GetSSHPort
	I0229 02:18:19.093828  360217 main.go:141] libmachine: (default-k8s-diff-port-254367) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:a9:36", ip: ""} in network mk-default-k8s-diff-port-254367: {Iface:virbr3 ExpiryTime:2024-02-29 03:09:29 +0000 UTC Type:0 Mac:52:54:00:52:a9:36 Iaid: IPaddr:192.168.72.88 Prefix:24 Hostname:default-k8s-diff-port-254367 Clientid:01:52:54:00:52:a9:36}
	I0229 02:18:19.093851  360217 main.go:141] libmachine: (default-k8s-diff-port-254367) DBG | domain default-k8s-diff-port-254367 has defined IP address 192.168.72.88 and MAC address 52:54:00:52:a9:36 in network mk-default-k8s-diff-port-254367
	I0229 02:18:19.093940  360217 main.go:141] libmachine: (default-k8s-diff-port-254367) Calling .GetSSHKeyPath
	I0229 02:18:19.094226  360217 main.go:141] libmachine: (default-k8s-diff-port-254367) Calling .GetSSHUsername
	I0229 02:18:19.094421  360217 sshutil.go:53] new ssh client: &{IP:192.168.72.88 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-309085/.minikube/machines/default-k8s-diff-port-254367/id_rsa Username:docker}
	W0229 02:18:19.100332  360217 kapi.go:245] failed rescaling "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-254367" context to 1 replicas: non-retryable failure while rescaling coredns deployment: Operation cannot be fulfilled on deployments.apps "coredns": the object has been modified; please apply your changes to the latest version and try again
	E0229 02:18:19.100363  360217 start.go:219] Unable to scale down deployment "coredns" in namespace "kube-system" to 1 replica: non-retryable failure while rescaling coredns deployment: Operation cannot be fulfilled on deployments.apps "coredns": the object has been modified; please apply your changes to the latest version and try again
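	("the object has been modified" is Kubernetes' optimistic-concurrency conflict: the coredns deployment's resourceVersion changed between minikube's read and its scale request, so the write is rejected rather than silently clobbering the newer object. The standard remedy is to re-read and retry; a hypothetical manual retry with plain kubectl, not minikube's code path:

	for attempt in 1 2 3; do
	  sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	    -n kube-system scale deployment coredns --replicas=1 && break
	  sleep 1   # each retry re-reads the object, picking up the new resourceVersion
	done
	)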
	I0229 02:18:19.100388  360217 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.72.88 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0229 02:18:19.101941  360217 out.go:177] * Verifying Kubernetes components...
	I0229 02:18:19.103689  360217 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 02:18:19.114276  360217 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44089
	I0229 02:18:19.114684  360217 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:18:19.115166  360217 main.go:141] libmachine: Using API Version  1
	I0229 02:18:19.115190  360217 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:18:19.115557  360217 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:18:15.308627  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:18:17.807561  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:18:19.808357  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:18:18.299689  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:18:18.315449  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:18:18.315523  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:18:18.357310  360776 cri.go:89] found id: ""
	I0229 02:18:18.357347  360776 logs.go:276] 0 containers: []
	W0229 02:18:18.357360  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:18:18.357369  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:18:18.357427  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:18:18.410178  360776 cri.go:89] found id: ""
	I0229 02:18:18.410212  360776 logs.go:276] 0 containers: []
	W0229 02:18:18.410224  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:18:18.410232  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:18:18.410300  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:18:18.452273  360776 cri.go:89] found id: ""
	I0229 02:18:18.452303  360776 logs.go:276] 0 containers: []
	W0229 02:18:18.452315  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:18:18.452330  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:18:18.452398  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:18:18.493134  360776 cri.go:89] found id: ""
	I0229 02:18:18.493161  360776 logs.go:276] 0 containers: []
	W0229 02:18:18.493170  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:18:18.493176  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:18:18.493247  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:18:18.530812  360776 cri.go:89] found id: ""
	I0229 02:18:18.530843  360776 logs.go:276] 0 containers: []
	W0229 02:18:18.530855  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:18:18.530864  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:18:18.530931  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:18:18.572183  360776 cri.go:89] found id: ""
	I0229 02:18:18.572216  360776 logs.go:276] 0 containers: []
	W0229 02:18:18.572231  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:18:18.572240  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:18:18.572314  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:18:18.612117  360776 cri.go:89] found id: ""
	I0229 02:18:18.612148  360776 logs.go:276] 0 containers: []
	W0229 02:18:18.612160  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:18:18.612169  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:18:18.612230  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:18:18.653827  360776 cri.go:89] found id: ""
	I0229 02:18:18.653855  360776 logs.go:276] 0 containers: []
	W0229 02:18:18.653866  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:18:18.653879  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:18:18.653898  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:18:18.688058  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:18:18.688094  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:18:18.735458  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:18:18.735493  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:18:18.795735  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:18:18.795780  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:18:18.816207  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:18:18.816239  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:18:18.928414  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
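	(Every "describe nodes" attempt fails identically because nothing is serving on localhost:8443, which matches the crictl probes above finding no kube-apiserver container at all. Two quick in-VM checks, a sketch assuming ss and curl are present in the guest image, that separate "apiserver never started" from "apiserver up but unhealthy":

	sudo ss -tlnp | grep 8443 || echo "nothing listening on 8443: apiserver never started"
	curl -ksf https://localhost:8443/healthz || echo "apiserver not healthy"
	)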
	I0229 02:18:21.429284  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:18:21.445010  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:18:21.445084  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:18:21.484084  360776 cri.go:89] found id: ""
	I0229 02:18:21.484128  360776 logs.go:276] 0 containers: []
	W0229 02:18:21.484141  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:18:21.484159  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:18:21.484223  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:18:21.536516  360776 cri.go:89] found id: ""
	I0229 02:18:21.536550  360776 logs.go:276] 0 containers: []
	W0229 02:18:21.536563  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:18:21.536571  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:18:21.536636  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:18:21.588732  360776 cri.go:89] found id: ""
	I0229 02:18:21.588761  360776 logs.go:276] 0 containers: []
	W0229 02:18:21.588773  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:18:21.588782  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:18:21.588843  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:18:21.644434  360776 cri.go:89] found id: ""
	I0229 02:18:21.644470  360776 logs.go:276] 0 containers: []
	W0229 02:18:21.644483  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:18:21.644491  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:18:21.644560  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:18:21.685496  360776 cri.go:89] found id: ""
	I0229 02:18:21.685528  360776 logs.go:276] 0 containers: []
	W0229 02:18:21.685540  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:18:21.685548  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:18:21.685615  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:18:21.741146  360776 cri.go:89] found id: ""
	I0229 02:18:21.741176  360776 logs.go:276] 0 containers: []
	W0229 02:18:21.741188  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:18:21.741196  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:18:21.741287  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:18:21.790924  360776 cri.go:89] found id: ""
	I0229 02:18:21.790953  360776 logs.go:276] 0 containers: []
	W0229 02:18:21.790964  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:18:21.790972  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:18:21.791040  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:18:21.843079  360776 cri.go:89] found id: ""
	I0229 02:18:21.843107  360776 logs.go:276] 0 containers: []
	W0229 02:18:21.843118  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:18:21.843131  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:18:21.843155  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:18:21.917006  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:18:21.917035  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:18:21.987268  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:18:21.987313  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:18:22.009660  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:18:22.009699  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:18:22.101976  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:18:22.102000  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:18:22.102017  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:18:19.115785  360217 main.go:141] libmachine: (default-k8s-diff-port-254367) Calling .GetState
	I0229 02:18:19.118586  360217 main.go:141] libmachine: (default-k8s-diff-port-254367) Calling .DriverName
	I0229 02:18:19.118869  360217 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0229 02:18:19.118886  360217 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0229 02:18:19.118905  360217 main.go:141] libmachine: (default-k8s-diff-port-254367) Calling .GetSSHHostname
	I0229 02:18:19.121918  360217 main.go:141] libmachine: (default-k8s-diff-port-254367) DBG | domain default-k8s-diff-port-254367 has defined MAC address 52:54:00:52:a9:36 in network mk-default-k8s-diff-port-254367
	I0229 02:18:19.122332  360217 main.go:141] libmachine: (default-k8s-diff-port-254367) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:a9:36", ip: ""} in network mk-default-k8s-diff-port-254367: {Iface:virbr3 ExpiryTime:2024-02-29 03:09:29 +0000 UTC Type:0 Mac:52:54:00:52:a9:36 Iaid: IPaddr:192.168.72.88 Prefix:24 Hostname:default-k8s-diff-port-254367 Clientid:01:52:54:00:52:a9:36}
	I0229 02:18:19.122364  360217 main.go:141] libmachine: (default-k8s-diff-port-254367) DBG | domain default-k8s-diff-port-254367 has defined IP address 192.168.72.88 and MAC address 52:54:00:52:a9:36 in network mk-default-k8s-diff-port-254367
	I0229 02:18:19.122552  360217 main.go:141] libmachine: (default-k8s-diff-port-254367) Calling .GetSSHPort
	I0229 02:18:19.122770  360217 main.go:141] libmachine: (default-k8s-diff-port-254367) Calling .GetSSHKeyPath
	I0229 02:18:19.122996  360217 main.go:141] libmachine: (default-k8s-diff-port-254367) Calling .GetSSHUsername
	I0229 02:18:19.123154  360217 sshutil.go:53] new ssh client: &{IP:192.168.72.88 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-309085/.minikube/machines/default-k8s-diff-port-254367/id_rsa Username:docker}
	I0229 02:18:19.269274  360217 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-254367" to be "Ready" ...
	I0229 02:18:19.269550  360217 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
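	(The sed pipeline above rewrites the CoreDNS ConfigMap in place: it inserts a "log" directive before the "errors" line and, before the "forward . /etc/resolv.conf" line, a hosts block so pods can resolve host.minikube.internal to the host-side gateway. Reconstructed from the sed expressions, the injected Corefile fragment is roughly:

	hosts {
	   192.168.72.1 host.minikube.internal
	   fallthrough
	}
	)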
	I0229 02:18:19.282334  360217 node_ready.go:49] node "default-k8s-diff-port-254367" has status "Ready":"True"
	I0229 02:18:19.282362  360217 node_ready.go:38] duration metric: took 13.046941ms waiting for node "default-k8s-diff-port-254367" to be "Ready" ...
	I0229 02:18:19.282377  360217 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 02:18:19.298326  360217 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-254367" in "kube-system" namespace to be "Ready" ...
	I0229 02:18:19.311217  360217 pod_ready.go:92] pod "etcd-default-k8s-diff-port-254367" in "kube-system" namespace has status "Ready":"True"
	I0229 02:18:19.311243  360217 pod_ready.go:81] duration metric: took 12.887306ms waiting for pod "etcd-default-k8s-diff-port-254367" in "kube-system" namespace to be "Ready" ...
	I0229 02:18:19.311252  360217 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-254367" in "kube-system" namespace to be "Ready" ...
	I0229 02:18:19.317185  360217 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-254367" in "kube-system" namespace has status "Ready":"True"
	I0229 02:18:19.317210  360217 pod_ready.go:81] duration metric: took 5.951807ms waiting for pod "kube-apiserver-default-k8s-diff-port-254367" in "kube-system" namespace to be "Ready" ...
	I0229 02:18:19.317219  360217 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-254367" in "kube-system" namespace to be "Ready" ...
	I0229 02:18:19.330495  360217 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0229 02:18:19.330519  360217 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0229 02:18:19.331739  360217 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-254367" in "kube-system" namespace has status "Ready":"True"
	I0229 02:18:19.331775  360217 pod_ready.go:81] duration metric: took 14.548327ms waiting for pod "kube-controller-manager-default-k8s-diff-port-254367" in "kube-system" namespace to be "Ready" ...
	I0229 02:18:19.331791  360217 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-dlgmz" in "kube-system" namespace to be "Ready" ...
	I0229 02:18:19.363610  360217 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0229 02:18:19.461745  360217 addons.go:426] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0229 02:18:19.461779  360217 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0229 02:18:19.467030  360217 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0229 02:18:19.467234  360217 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0229 02:18:19.467253  360217 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0229 02:18:19.568507  360217 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0229 02:18:19.568540  360217 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0229 02:18:19.641306  360217 addons.go:426] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0229 02:18:19.641346  360217 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0229 02:18:19.750251  360217 addons.go:426] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0229 02:18:19.750282  360217 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0229 02:18:19.807358  360217 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0229 02:18:19.886145  360217 addons.go:426] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0229 02:18:19.886169  360217 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0229 02:18:20.066662  360217 addons.go:426] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0229 02:18:20.066699  360217 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0229 02:18:20.097965  360217 addons.go:426] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0229 02:18:20.097990  360217 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0229 02:18:20.136049  360217 addons.go:426] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0229 02:18:20.136075  360217 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0229 02:18:20.232757  360217 addons.go:426] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0229 02:18:20.232780  360217 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0229 02:18:20.290653  360217 addons.go:426] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0229 02:18:20.290679  360217 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0229 02:18:20.359549  360217 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
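The addon installer stages each manifest from memory onto the guest ("scp memory --> ...") and then applies all staged files in a single kubectl invocation, as the Run: line above shows. A simplified local sketch of that stage-then-apply pattern — the file path and sample manifest are illustrative, not minikube's actual assets:

    package main

    import (
        "log"
        "os"
        "os/exec"
    )

    func main() {
        // Step 1: write an in-memory manifest to disk, standing in for "scp memory -->".
        manifest := []byte("apiVersion: v1\nkind: Namespace\nmetadata:\n  name: kubernetes-dashboard\n")
        if err := os.WriteFile("/tmp/dashboard-ns.yaml", manifest, 0o644); err != nil {
            log.Fatal(err)
        }
        // Step 2: apply every staged file in one kubectl call, mirroring the log line.
        cmd := exec.Command("kubectl", "apply",
            "-f", "/tmp/dashboard-ns.yaml",
            // ... one -f flag per staged addon manifest ...
        )
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        if err := cmd.Run(); err != nil {
            log.Fatal(err)
        }
    }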
	I0229 02:18:21.354053  360217 pod_ready.go:102] pod "kube-proxy-dlgmz" in "kube-system" namespace has status "Ready":"False"
	I0229 02:18:21.788753  360217 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.519159841s)
	I0229 02:18:21.788798  360217 start.go:929] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
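The completed sed pipeline above splices a hosts block in front of CoreDNS's forward directive so that host.minikube.internal resolves to the host gateway (192.168.72.1 here). The same edit could be made with client-go; a sketch, assuming a reachable cluster and the 8-space Corefile indentation the sed expression anchors on:

    package main

    import (
        "context"
        "log"
        "strings"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            log.Fatal(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        cm, err := cs.CoreV1().ConfigMaps("kube-system").Get(context.TODO(), "coredns", metav1.GetOptions{})
        if err != nil {
            log.Fatal(err)
        }
        hosts := "        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }\n"
        // Insert the hosts block before the forward directive, as the sed script does.
        cm.Data["Corefile"] = strings.Replace(cm.Data["Corefile"],
            "        forward . /etc/resolv.conf", hosts+"        forward . /etc/resolv.conf", 1)
        if _, err := cs.CoreV1().ConfigMaps("kube-system").Update(context.TODO(), cm, metav1.UpdateOptions{}); err != nil {
            log.Fatal(err)
        }
        log.Println("host record injected into CoreDNS's ConfigMap")
    }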
	I0229 02:18:22.362286  360217 pod_ready.go:92] pod "kube-proxy-dlgmz" in "kube-system" namespace has status "Ready":"True"
	I0229 02:18:22.362318  360217 pod_ready.go:81] duration metric: took 3.030515197s waiting for pod "kube-proxy-dlgmz" in "kube-system" namespace to be "Ready" ...
	I0229 02:18:22.362331  360217 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-254367" in "kube-system" namespace to be "Ready" ...
	I0229 02:18:22.392397  360217 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-254367" in "kube-system" namespace has status "Ready":"True"
	I0229 02:18:22.392428  360217 pod_ready.go:81] duration metric: took 30.087397ms waiting for pod "kube-scheduler-default-k8s-diff-port-254367" in "kube-system" namespace to be "Ready" ...
	I0229 02:18:22.392441  360217 pod_ready.go:38] duration metric: took 3.110051734s of extra waiting for all system-critical pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 02:18:22.392462  360217 api_server.go:52] waiting for apiserver process to appear ...
	I0229 02:18:22.392516  360217 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:18:22.755340  360217 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.288276833s)
	I0229 02:18:22.755387  360217 main.go:141] libmachine: Making call to close driver server
	I0229 02:18:22.755402  360217 main.go:141] libmachine: (default-k8s-diff-port-254367) Calling .Close
	I0229 02:18:22.755534  360217 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.948137303s)
	I0229 02:18:22.755568  360217 main.go:141] libmachine: Making call to close driver server
	I0229 02:18:22.755581  360217 main.go:141] libmachine: (default-k8s-diff-port-254367) Calling .Close
	I0229 02:18:22.755693  360217 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.392056284s)
	I0229 02:18:22.755714  360217 main.go:141] libmachine: Making call to close driver server
	I0229 02:18:22.755723  360217 main.go:141] libmachine: (default-k8s-diff-port-254367) Calling .Close
	I0229 02:18:22.755982  360217 main.go:141] libmachine: (default-k8s-diff-port-254367) DBG | Closing plugin on server side
	I0229 02:18:22.756023  360217 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:18:22.756037  360217 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:18:22.756047  360217 main.go:141] libmachine: Making call to close driver server
	I0229 02:18:22.756052  360217 main.go:141] libmachine: (default-k8s-diff-port-254367) Calling .Close
	I0229 02:18:22.756327  360217 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:18:22.756341  360217 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:18:22.756357  360217 main.go:141] libmachine: Making call to close driver server
	I0229 02:18:22.756366  360217 main.go:141] libmachine: (default-k8s-diff-port-254367) Calling .Close
	I0229 02:18:22.760172  360217 main.go:141] libmachine: (default-k8s-diff-port-254367) DBG | Closing plugin on server side
	I0229 02:18:22.760183  360217 main.go:141] libmachine: (default-k8s-diff-port-254367) DBG | Closing plugin on server side
	I0229 02:18:22.760221  360217 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:18:22.760234  360217 main.go:141] libmachine: (default-k8s-diff-port-254367) DBG | Closing plugin on server side
	I0229 02:18:22.760250  360217 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:18:22.760268  360217 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:18:22.760258  360217 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:18:22.760298  360217 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:18:22.760278  360217 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:18:22.760380  360217 main.go:141] libmachine: Making call to close driver server
	I0229 02:18:22.760390  360217 main.go:141] libmachine: (default-k8s-diff-port-254367) Calling .Close
	I0229 02:18:22.760627  360217 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:18:22.760646  360217 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:18:22.760659  360217 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-254367"
	I0229 02:18:22.788927  360217 main.go:141] libmachine: Making call to close driver server
	I0229 02:18:22.788955  360217 main.go:141] libmachine: (default-k8s-diff-port-254367) Calling .Close
	I0229 02:18:22.789219  360217 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:18:22.789242  360217 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:18:23.407247  360217 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (3.047637799s)
	I0229 02:18:23.407257  360217 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.014711886s)
	I0229 02:18:23.407374  360217 api_server.go:72] duration metric: took 4.306954781s to wait for apiserver process to appear ...
	I0229 02:18:23.407399  360217 api_server.go:88] waiting for apiserver healthz status ...
	I0229 02:18:23.407433  360217 api_server.go:253] Checking apiserver healthz at https://192.168.72.88:8444/healthz ...
	I0229 02:18:23.407314  360217 main.go:141] libmachine: Making call to close driver server
	I0229 02:18:23.407545  360217 main.go:141] libmachine: (default-k8s-diff-port-254367) Calling .Close
	I0229 02:18:23.407931  360217 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:18:23.407948  360217 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:18:23.407959  360217 main.go:141] libmachine: Making call to close driver server
	I0229 02:18:23.407967  360217 main.go:141] libmachine: (default-k8s-diff-port-254367) Calling .Close
	I0229 02:18:23.408309  360217 main.go:141] libmachine: (default-k8s-diff-port-254367) DBG | Closing plugin on server side
	I0229 02:18:23.408318  360217 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:18:23.408331  360217 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:18:23.411220  360217 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-254367 addons enable metrics-server
	
	I0229 02:18:23.412663  360217 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass, dashboard
	I0229 02:18:23.414033  360217 addons.go:505] enable addons completed in 4.386557527s: enabled=[storage-provisioner metrics-server default-storageclass dashboard]
	I0229 02:18:23.439279  360217 api_server.go:279] https://192.168.72.88:8444/healthz returned 200:
	ok
	I0229 02:18:23.443380  360217 api_server.go:141] control plane version: v1.28.4
	I0229 02:18:23.443419  360217 api_server.go:131] duration metric: took 36.010336ms to wait for apiserver health ...
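The healthz wait simply probes the apiserver's /healthz endpoint (port 8444 for this default-k8s-diff-port profile) until it returns 200 with a body of "ok". A bare-bones Go probe; note the real client trusts the cluster CA rather than skipping TLS verification:

    package main

    import (
        "crypto/tls"
        "io"
        "log"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                // Skipping verification only keeps this sketch short.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        resp, err := client.Get("https://192.168.72.88:8444/healthz")
        if err != nil {
            log.Fatal(err)
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        // Expect "returned 200: ok", matching the api_server.go lines above.
        log.Printf("healthz returned %d: %s", resp.StatusCode, body)
    }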
	I0229 02:18:23.443434  360217 system_pods.go:43] waiting for kube-system pods to appear ...
	I0229 02:18:23.459207  360217 system_pods.go:59] 9 kube-system pods found
	I0229 02:18:23.459239  360217 system_pods.go:61] "coredns-5dd5756b68-vsxcv" [f2cabd39-df55-4e81-85d3-a745eb5533c6] Running
	I0229 02:18:23.459246  360217 system_pods.go:61] "coredns-5dd5756b68-x6qjk" [3a4370e5-86c3-4c8b-b275-70e55da74256] Running
	I0229 02:18:23.459253  360217 system_pods.go:61] "etcd-default-k8s-diff-port-254367" [5f2c758b-5068-4138-b2c1-b4161802f59f] Running
	I0229 02:18:23.459259  360217 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-254367" [bfd63194-f697-48ec-a594-9fb43acd5c1c] Running
	I0229 02:18:23.459265  360217 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-254367" [817f802d-a424-425d-89ae-8cab6c34c18d] Running
	I0229 02:18:23.459271  360217 system_pods.go:61] "kube-proxy-dlgmz" [0d9e6b25-c506-43a6-b1d2-e3906fcf7b92] Running
	I0229 02:18:23.459277  360217 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-254367" [fd8b2ce6-a716-4aa4-b09d-c83b4c9c3b90] Running
	I0229 02:18:23.459288  360217 system_pods.go:61] "metrics-server-57f55c9bc5-2wc8d" [da2ffb04-58a1-476a-8ea2-5e8d33512c7d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0229 02:18:23.459296  360217 system_pods.go:61] "storage-provisioner" [0e031ad8-0a53-4aa3-9a00-e03078b0db2f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0229 02:18:23.459314  360217 system_pods.go:74] duration metric: took 15.86958ms to wait for pod list to return data ...
	I0229 02:18:23.459329  360217 default_sa.go:34] waiting for default service account to be created ...
	I0229 02:18:23.464125  360217 default_sa.go:45] found service account: "default"
	I0229 02:18:23.464196  360217 default_sa.go:55] duration metric: took 4.855817ms for default service account to be created ...
	I0229 02:18:23.464222  360217 system_pods.go:116] waiting for k8s-apps to be running ...
	I0229 02:18:23.471833  360217 system_pods.go:86] 9 kube-system pods found
	I0229 02:18:23.471861  360217 system_pods.go:89] "coredns-5dd5756b68-vsxcv" [f2cabd39-df55-4e81-85d3-a745eb5533c6] Running
	I0229 02:18:23.471869  360217 system_pods.go:89] "coredns-5dd5756b68-x6qjk" [3a4370e5-86c3-4c8b-b275-70e55da74256] Running
	I0229 02:18:23.471876  360217 system_pods.go:89] "etcd-default-k8s-diff-port-254367" [5f2c758b-5068-4138-b2c1-b4161802f59f] Running
	I0229 02:18:23.471883  360217 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-254367" [bfd63194-f697-48ec-a594-9fb43acd5c1c] Running
	I0229 02:18:23.471889  360217 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-254367" [817f802d-a424-425d-89ae-8cab6c34c18d] Running
	I0229 02:18:23.471896  360217 system_pods.go:89] "kube-proxy-dlgmz" [0d9e6b25-c506-43a6-b1d2-e3906fcf7b92] Running
	I0229 02:18:23.471908  360217 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-254367" [fd8b2ce6-a716-4aa4-b09d-c83b4c9c3b90] Running
	I0229 02:18:23.471917  360217 system_pods.go:89] "metrics-server-57f55c9bc5-2wc8d" [da2ffb04-58a1-476a-8ea2-5e8d33512c7d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0229 02:18:23.471927  360217 system_pods.go:89] "storage-provisioner" [0e031ad8-0a53-4aa3-9a00-e03078b0db2f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0229 02:18:23.471943  360217 system_pods.go:126] duration metric: took 7.704603ms to wait for k8s-apps to be running ...
	I0229 02:18:23.471955  360217 system_svc.go:44] waiting for kubelet service to be running ....
	I0229 02:18:23.472051  360217 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 02:18:23.495777  360217 system_svc.go:56] duration metric: took 23.811126ms WaitForService to wait for kubelet.
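WaitForService above relies on systemctl's exit status: `systemctl is-active --quiet <unit>` exits 0 only when the unit is active. The same check from Go — the unit name is taken from the log; the sudo wrapper is dropped for brevity:

    package main

    import (
        "log"
        "os/exec"
    )

    func main() {
        // A non-zero exit status surfaces as a non-nil error from Run().
        if err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run(); err != nil {
            log.Fatalf("kubelet service is not active: %v", err)
        }
        log.Println("kubelet service is running")
    }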
	I0229 02:18:23.495810  360217 kubeadm.go:581] duration metric: took 4.395396941s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0229 02:18:23.495838  360217 node_conditions.go:102] verifying NodePressure condition ...
	I0229 02:18:23.502935  360217 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0229 02:18:23.502962  360217 node_conditions.go:123] node cpu capacity is 2
	I0229 02:18:23.502975  360217 node_conditions.go:105] duration metric: took 7.130297ms to run NodePressure ...
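The NodePressure step reads capacities straight off the Node object's status, which is where the 17734596Ki ephemeral-storage and 2-CPU figures above come from. A sketch of reading those fields with client-go, under the same kubeconfig assumption as before:

    package main

    import (
        "context"
        "log"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            log.Fatal(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        node, err := cs.CoreV1().Nodes().Get(context.TODO(), "default-k8s-diff-port-254367", metav1.GetOptions{})
        if err != nil {
            log.Fatal(err)
        }
        storage := node.Status.Capacity[corev1.ResourceEphemeralStorage]
        cpu := node.Status.Capacity[corev1.ResourceCPU]
        log.Printf("node storage ephemeral capacity is %s", storage.String())
        log.Printf("node cpu capacity is %s", cpu.String())
        // The pressure conditions themselves live in node.Status.Conditions
        // (MemoryPressure, DiskPressure, PIDPressure should all be False).
    }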
	I0229 02:18:23.502991  360217 start.go:228] waiting for startup goroutines ...
	I0229 02:18:23.503004  360217 start.go:233] waiting for cluster config update ...
	I0229 02:18:23.503019  360217 start.go:242] writing updated cluster config ...
	I0229 02:18:23.503329  360217 ssh_runner.go:195] Run: rm -f paused
	I0229 02:18:23.565856  360217 start.go:601] kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)
	I0229 02:18:23.567626  360217 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-254367" cluster and "default" namespace by default
	I0229 02:18:21.812768  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:18:24.310049  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:18:24.648787  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:18:24.663511  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:18:24.663574  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:18:24.702299  360776 cri.go:89] found id: ""
	I0229 02:18:24.702329  360776 logs.go:276] 0 containers: []
	W0229 02:18:24.702342  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:18:24.702349  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:18:24.702414  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:18:24.741664  360776 cri.go:89] found id: ""
	I0229 02:18:24.741696  360776 logs.go:276] 0 containers: []
	W0229 02:18:24.741708  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:18:24.741720  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:18:24.741782  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:18:24.809755  360776 cri.go:89] found id: ""
	I0229 02:18:24.809788  360776 logs.go:276] 0 containers: []
	W0229 02:18:24.809799  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:18:24.809807  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:18:24.809867  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:18:24.850308  360776 cri.go:89] found id: ""
	I0229 02:18:24.850335  360776 logs.go:276] 0 containers: []
	W0229 02:18:24.850344  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:18:24.850351  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:18:24.850408  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:18:24.903507  360776 cri.go:89] found id: ""
	I0229 02:18:24.903539  360776 logs.go:276] 0 containers: []
	W0229 02:18:24.903551  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:18:24.903559  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:18:24.903624  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:18:24.952996  360776 cri.go:89] found id: ""
	I0229 02:18:24.953026  360776 logs.go:276] 0 containers: []
	W0229 02:18:24.953039  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:18:24.953048  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:18:24.953119  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:18:24.999301  360776 cri.go:89] found id: ""
	I0229 02:18:24.999334  360776 logs.go:276] 0 containers: []
	W0229 02:18:24.999347  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:18:24.999355  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:18:24.999418  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:18:25.044310  360776 cri.go:89] found id: ""
	I0229 02:18:25.044350  360776 logs.go:276] 0 containers: []
	W0229 02:18:25.044362  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:18:25.044375  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:18:25.044391  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:18:25.091374  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:18:25.091407  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:18:25.109080  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:18:25.109118  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:18:25.186611  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:18:25.186639  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:18:25.186663  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:18:25.226779  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:18:25.226825  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
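Meanwhile the run using the v1.16.0 binaries (pid 360776) still has no control plane: each `crictl ps -a --quiet --name=<name>` returns no IDs, which cri.go records as `found id: ""` and logs.go as "0 containers", before falling back to gathering kubelet, dmesg, and containerd logs. A small sketch of that emptiness check — container names from the log; meant to run on the guest, where crictl is on PATH:

    package main

    import (
        "log"
        "os/exec"
        "strings"
    )

    func main() {
        // Empty --quiet output means no container matched the name filter.
        for _, name := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
            out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
            if err != nil {
                log.Fatal(err)
            }
            ids := strings.Fields(string(out))
            log.Printf("%q: %d containers: %v", name, len(ids), ids)
        }
    }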
	I0229 02:18:26.320759  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:18:28.807091  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:18:27.775896  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:18:27.789596  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:18:27.789662  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:18:27.834159  360776 cri.go:89] found id: ""
	I0229 02:18:27.834186  360776 logs.go:276] 0 containers: []
	W0229 02:18:27.834198  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:18:27.834207  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:18:27.834278  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:18:27.887355  360776 cri.go:89] found id: ""
	I0229 02:18:27.887386  360776 logs.go:276] 0 containers: []
	W0229 02:18:27.887398  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:18:27.887407  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:18:27.887481  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:18:27.927671  360776 cri.go:89] found id: ""
	I0229 02:18:27.927710  360776 logs.go:276] 0 containers: []
	W0229 02:18:27.927724  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:18:27.927740  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:18:27.927819  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:18:27.983438  360776 cri.go:89] found id: ""
	I0229 02:18:27.983471  360776 logs.go:276] 0 containers: []
	W0229 02:18:27.983484  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:18:27.983493  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:18:27.983562  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:18:28.026112  360776 cri.go:89] found id: ""
	I0229 02:18:28.026143  360776 logs.go:276] 0 containers: []
	W0229 02:18:28.026156  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:18:28.026238  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:18:28.026310  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:18:28.069085  360776 cri.go:89] found id: ""
	I0229 02:18:28.069118  360776 logs.go:276] 0 containers: []
	W0229 02:18:28.069130  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:18:28.069138  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:18:28.069285  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:18:28.115010  360776 cri.go:89] found id: ""
	I0229 02:18:28.115037  360776 logs.go:276] 0 containers: []
	W0229 02:18:28.115046  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:18:28.115051  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:18:28.115113  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:18:28.157726  360776 cri.go:89] found id: ""
	I0229 02:18:28.157756  360776 logs.go:276] 0 containers: []
	W0229 02:18:28.157769  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:18:28.157783  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:18:28.157800  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:18:28.218148  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:18:28.218196  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:18:28.238106  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:18:28.238142  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:18:28.328947  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:18:28.328971  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:18:28.328988  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:18:28.364795  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:18:28.364831  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:18:30.914422  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:18:30.929248  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:18:30.929334  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:18:30.983535  360776 cri.go:89] found id: ""
	I0229 02:18:30.983566  360776 logs.go:276] 0 containers: []
	W0229 02:18:30.983577  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:18:30.983585  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:18:30.983644  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:18:31.037809  360776 cri.go:89] found id: ""
	I0229 02:18:31.037842  360776 logs.go:276] 0 containers: []
	W0229 02:18:31.037853  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:18:31.037862  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:18:31.037933  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:18:31.089101  360776 cri.go:89] found id: ""
	I0229 02:18:31.089134  360776 logs.go:276] 0 containers: []
	W0229 02:18:31.089146  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:18:31.089154  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:18:31.089219  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:18:31.139413  360776 cri.go:89] found id: ""
	I0229 02:18:31.139444  360776 logs.go:276] 0 containers: []
	W0229 02:18:31.139456  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:18:31.139463  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:18:31.139542  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:18:31.177185  360776 cri.go:89] found id: ""
	I0229 02:18:31.177214  360776 logs.go:276] 0 containers: []
	W0229 02:18:31.177223  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:18:31.177229  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:18:31.177295  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:18:31.221339  360776 cri.go:89] found id: ""
	I0229 02:18:31.221374  360776 logs.go:276] 0 containers: []
	W0229 02:18:31.221387  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:18:31.221395  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:18:31.221461  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:18:31.261770  360776 cri.go:89] found id: ""
	I0229 02:18:31.261803  360776 logs.go:276] 0 containers: []
	W0229 02:18:31.261815  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:18:31.261824  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:18:31.261895  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:18:31.309126  360776 cri.go:89] found id: ""
	I0229 02:18:31.309157  360776 logs.go:276] 0 containers: []
	W0229 02:18:31.309168  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:18:31.309179  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:18:31.309193  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:18:31.362509  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:18:31.362552  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:18:31.379334  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:18:31.379383  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:18:31.471339  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:18:31.471359  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:18:31.471372  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:18:31.511126  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:18:31.511172  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:18:30.808454  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:18:33.308106  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:18:34.063372  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:18:34.077222  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:18:34.077297  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:18:34.116752  360776 cri.go:89] found id: ""
	I0229 02:18:34.116793  360776 logs.go:276] 0 containers: []
	W0229 02:18:34.116806  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:18:34.116815  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:18:34.116880  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:18:34.157658  360776 cri.go:89] found id: ""
	I0229 02:18:34.157689  360776 logs.go:276] 0 containers: []
	W0229 02:18:34.157700  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:18:34.157708  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:18:34.157779  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:18:34.199922  360776 cri.go:89] found id: ""
	I0229 02:18:34.199957  360776 logs.go:276] 0 containers: []
	W0229 02:18:34.199969  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:18:34.199977  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:18:34.200044  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:18:34.242474  360776 cri.go:89] found id: ""
	I0229 02:18:34.242505  360776 logs.go:276] 0 containers: []
	W0229 02:18:34.242517  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:18:34.242526  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:18:34.242585  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:18:34.289308  360776 cri.go:89] found id: ""
	I0229 02:18:34.289338  360776 logs.go:276] 0 containers: []
	W0229 02:18:34.289360  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:18:34.289367  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:18:34.289431  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:18:34.335947  360776 cri.go:89] found id: ""
	I0229 02:18:34.335985  360776 logs.go:276] 0 containers: []
	W0229 02:18:34.335997  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:18:34.336005  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:18:34.336073  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:18:34.377048  360776 cri.go:89] found id: ""
	I0229 02:18:34.377085  360776 logs.go:276] 0 containers: []
	W0229 02:18:34.377097  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:18:34.377107  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:18:34.377181  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:18:34.424208  360776 cri.go:89] found id: ""
	I0229 02:18:34.424238  360776 logs.go:276] 0 containers: []
	W0229 02:18:34.424250  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:18:34.424270  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:18:34.424288  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:18:34.500223  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:18:34.500245  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:18:34.500263  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:18:34.534652  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:18:34.534688  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:18:34.593369  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:18:34.593405  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:18:34.646940  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:18:34.646982  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:18:37.169523  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:18:37.184168  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:18:37.184245  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:18:37.232979  360776 cri.go:89] found id: ""
	I0229 02:18:37.233015  360776 logs.go:276] 0 containers: []
	W0229 02:18:37.233026  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:18:37.233037  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:18:37.233110  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:18:37.275771  360776 cri.go:89] found id: ""
	I0229 02:18:37.275796  360776 logs.go:276] 0 containers: []
	W0229 02:18:37.275805  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:18:37.275811  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:18:37.275877  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:18:37.322421  360776 cri.go:89] found id: ""
	I0229 02:18:37.322451  360776 logs.go:276] 0 containers: []
	W0229 02:18:37.322460  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:18:37.322466  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:18:37.322525  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:18:35.807858  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:18:38.307264  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:18:37.366974  360776 cri.go:89] found id: ""
	I0229 02:18:37.367001  360776 logs.go:276] 0 containers: []
	W0229 02:18:37.367011  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:18:37.367020  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:18:37.367080  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:18:37.408780  360776 cri.go:89] found id: ""
	I0229 02:18:37.408811  360776 logs.go:276] 0 containers: []
	W0229 02:18:37.408822  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:18:37.408828  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:18:37.408880  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:18:37.447402  360776 cri.go:89] found id: ""
	I0229 02:18:37.447429  360776 logs.go:276] 0 containers: []
	W0229 02:18:37.447441  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:18:37.447449  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:18:37.447511  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:18:37.486454  360776 cri.go:89] found id: ""
	I0229 02:18:37.486491  360776 logs.go:276] 0 containers: []
	W0229 02:18:37.486502  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:18:37.486510  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:18:37.486579  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:18:37.531484  360776 cri.go:89] found id: ""
	I0229 02:18:37.531517  360776 logs.go:276] 0 containers: []
	W0229 02:18:37.531533  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:18:37.531545  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:18:37.531562  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:18:37.581274  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:18:37.581312  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:18:37.601745  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:18:37.601777  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:18:37.707773  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:18:37.707801  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:18:37.707818  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:18:37.740658  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:18:37.740698  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:18:40.296427  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:18:40.311365  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:18:40.311439  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:18:40.354647  360776 cri.go:89] found id: ""
	I0229 02:18:40.354675  360776 logs.go:276] 0 containers: []
	W0229 02:18:40.354693  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:18:40.354701  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:18:40.354769  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:18:40.400490  360776 cri.go:89] found id: ""
	I0229 02:18:40.400520  360776 logs.go:276] 0 containers: []
	W0229 02:18:40.400529  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:18:40.400535  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:18:40.400602  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:18:40.442029  360776 cri.go:89] found id: ""
	I0229 02:18:40.442051  360776 logs.go:276] 0 containers: []
	W0229 02:18:40.442060  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:18:40.442065  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:18:40.442169  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:18:40.481183  360776 cri.go:89] found id: ""
	I0229 02:18:40.481216  360776 logs.go:276] 0 containers: []
	W0229 02:18:40.481228  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:18:40.481237  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:18:40.481316  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:18:40.523076  360776 cri.go:89] found id: ""
	I0229 02:18:40.523104  360776 logs.go:276] 0 containers: []
	W0229 02:18:40.523113  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:18:40.523118  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:18:40.523209  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:18:40.561787  360776 cri.go:89] found id: ""
	I0229 02:18:40.561817  360776 logs.go:276] 0 containers: []
	W0229 02:18:40.561826  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:18:40.561832  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:18:40.561908  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:18:40.598621  360776 cri.go:89] found id: ""
	I0229 02:18:40.598647  360776 logs.go:276] 0 containers: []
	W0229 02:18:40.598655  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:18:40.598662  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:18:40.598710  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:18:40.637701  360776 cri.go:89] found id: ""
	I0229 02:18:40.637734  360776 logs.go:276] 0 containers: []
	W0229 02:18:40.637745  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:18:40.637758  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:18:40.637775  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:18:40.685317  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:18:40.685351  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:18:40.735348  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:18:40.735386  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:18:40.751373  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:18:40.751434  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:18:40.822604  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:18:40.822624  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:18:40.822637  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:18:40.311266  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:18:42.806740  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:18:44.809136  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:18:43.357769  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:18:43.373119  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:18:43.373186  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:18:43.409160  360776 cri.go:89] found id: ""
	I0229 02:18:43.409181  360776 logs.go:276] 0 containers: []
	W0229 02:18:43.409189  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:18:43.409195  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:18:43.409238  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:18:43.447193  360776 cri.go:89] found id: ""
	I0229 02:18:43.447222  360776 logs.go:276] 0 containers: []
	W0229 02:18:43.447231  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:18:43.447237  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:18:43.447296  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:18:43.487906  360776 cri.go:89] found id: ""
	I0229 02:18:43.487934  360776 logs.go:276] 0 containers: []
	W0229 02:18:43.487942  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:18:43.487949  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:18:43.488008  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:18:43.527968  360776 cri.go:89] found id: ""
	I0229 02:18:43.528002  360776 logs.go:276] 0 containers: []
	W0229 02:18:43.528016  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:18:43.528024  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:18:43.528100  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:18:43.573298  360776 cri.go:89] found id: ""
	I0229 02:18:43.573333  360776 logs.go:276] 0 containers: []
	W0229 02:18:43.573344  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:18:43.573351  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:18:43.573461  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:18:43.630816  360776 cri.go:89] found id: ""
	I0229 02:18:43.630856  360776 logs.go:276] 0 containers: []
	W0229 02:18:43.630867  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:18:43.630881  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:18:43.630954  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:18:43.701516  360776 cri.go:89] found id: ""
	I0229 02:18:43.701547  360776 logs.go:276] 0 containers: []
	W0229 02:18:43.701559  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:18:43.701567  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:18:43.701636  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:18:43.747444  360776 cri.go:89] found id: ""
	I0229 02:18:43.747474  360776 logs.go:276] 0 containers: []
	W0229 02:18:43.747484  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:18:43.747494  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:18:43.747510  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:18:43.828216  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:18:43.828246  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:18:43.828270  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:18:43.874647  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:18:43.874684  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:18:43.937776  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:18:43.937808  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:18:43.989210  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:18:43.989250  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
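
The cycle above is minikube's diagnostic fallback: it probes crictl for each expected control-plane container, finds none, and falls back to node-level logs. A minimal sketch for reproducing the same probe by hand inside the VM (component list, line counts, and paths copied from the log lines above; this is not a stable minikube interface):

    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet kubernetes-dashboard; do
      # An empty result is what produces the 'No container was found' warnings above.
      ids=$(sudo crictl ps -a --quiet --name="$name")
      [ -z "$ids" ] && echo "No container was found matching \"$name\""
    done
    # With no containers to inspect, fall back to node-level logs:
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u containerd -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig
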
	I0229 02:18:46.506056  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:18:46.519717  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:18:46.519784  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:18:46.585095  360776 cri.go:89] found id: ""
	I0229 02:18:46.585128  360776 logs.go:276] 0 containers: []
	W0229 02:18:46.585141  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:18:46.585149  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:18:46.585212  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:18:46.638520  360776 cri.go:89] found id: ""
	I0229 02:18:46.638553  360776 logs.go:276] 0 containers: []
	W0229 02:18:46.638565  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:18:46.638572  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:18:46.638637  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:18:46.691413  360776 cri.go:89] found id: ""
	I0229 02:18:46.691446  360776 logs.go:276] 0 containers: []
	W0229 02:18:46.691458  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:18:46.691466  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:18:46.691532  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:18:46.735054  360776 cri.go:89] found id: ""
	I0229 02:18:46.735083  360776 logs.go:276] 0 containers: []
	W0229 02:18:46.735092  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:18:46.735098  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:18:46.735159  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:18:46.772486  360776 cri.go:89] found id: ""
	I0229 02:18:46.772531  360776 logs.go:276] 0 containers: []
	W0229 02:18:46.772543  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:18:46.772551  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:18:46.772610  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:18:46.815466  360776 cri.go:89] found id: ""
	I0229 02:18:46.815491  360776 logs.go:276] 0 containers: []
	W0229 02:18:46.815499  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:18:46.815505  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:18:46.815553  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:18:46.853168  360776 cri.go:89] found id: ""
	I0229 02:18:46.853199  360776 logs.go:276] 0 containers: []
	W0229 02:18:46.853212  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:18:46.853220  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:18:46.853299  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:18:46.894320  360776 cri.go:89] found id: ""
	I0229 02:18:46.894353  360776 logs.go:276] 0 containers: []
	W0229 02:18:46.894365  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:18:46.894378  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:18:46.894394  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:18:46.944593  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:18:46.944631  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:18:46.960405  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:18:46.960433  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:18:47.029929  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:18:47.029960  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:18:47.029977  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:18:47.065292  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:18:47.065327  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:18:47.308699  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:18:49.620521  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:18:49.808633  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:18:49.636247  360776 kubeadm.go:640] restartCluster took 4m12.880265518s
	W0229 02:18:49.636335  360776 out.go:239] ! Unable to restart cluster, will reset it: apiserver healthz: apiserver process never appeared
	I0229 02:18:49.636372  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0229 02:18:50.114412  360776 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 02:18:50.130257  360776 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0229 02:18:50.141556  360776 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 02:18:50.152882  360776 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 02:18:50.152929  360776 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
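
This is the reset path taken once restartCluster gives up: kubeadm reset, a stale-kubeconfig check (exit status 2 above simply means none of the four files exist, so cleanup is skipped), then a clean kubeadm init. A hand-run equivalent of the sequence, assuming the generated /var/tmp/minikube/kubeadm.yaml is already in place (its contents are not shown in this report):

    sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" \
      kubeadm reset --cri-socket /run/containerd/containerd.sock --force
    # Exit status 2 here means there are no stale configs to clean up:
    sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf \
      /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
    # Re-init, ignoring the same preflight checks as the full flag in the log above:
    sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" \
      kubeadm init --config /var/tmp/minikube/kubeadm.yaml \
      --ignore-preflight-errors=Port-10250,Swap,NumCPU
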
	I0229 02:18:50.213815  360776 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0229 02:18:50.213922  360776 kubeadm.go:322] [preflight] Running pre-flight checks
	I0229 02:18:50.341927  360776 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0229 02:18:50.342103  360776 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0229 02:18:50.342249  360776 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0229 02:18:50.577201  360776 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0229 02:18:50.578563  360776 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0229 02:18:50.587158  360776 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0229 02:18:50.712207  360776 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0229 02:18:50.714032  360776 out.go:204]   - Generating certificates and keys ...
	I0229 02:18:50.714149  360776 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0229 02:18:50.716103  360776 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0229 02:18:50.717503  360776 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0229 02:18:50.718203  360776 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0229 02:18:50.719194  360776 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0229 02:18:50.719913  360776 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0229 02:18:50.721364  360776 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0229 02:18:50.722412  360776 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0229 02:18:50.723087  360776 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0229 02:18:50.723663  360776 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0229 02:18:50.723813  360776 kubeadm.go:322] [certs] Using the existing "sa" key
	I0229 02:18:50.724029  360776 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0229 02:18:51.003432  360776 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0229 02:18:51.145978  360776 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0229 02:18:51.230808  360776 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0229 02:18:51.340889  360776 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0229 02:18:51.341726  360776 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0229 02:18:51.343443  360776 out.go:204]   - Booting up control plane ...
	I0229 02:18:51.343564  360776 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0229 02:18:51.347723  360776 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0229 02:18:51.348592  360776 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0229 02:18:51.349514  360776 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0229 02:18:51.352720  360776 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0229 02:18:52.307313  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:18:54.806310  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:18:56.806412  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:18:58.806973  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:19:01.306043  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:19:03.308131  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:19:05.308210  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:19:07.807594  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:19:09.812481  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:19:12.308103  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:19:14.310513  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:19:16.806841  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:19:18.807740  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:19:21.306666  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:19:23.307064  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:19:25.806451  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:19:27.806822  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:19:29.807253  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:19:31.352923  360776 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0229 02:19:31.353370  360776 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 02:19:31.353570  360776 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
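
kubeadm's kubelet-check is a plain HTTP probe of the kubelet's local healthz endpoint (port 10248 by default); "connection refused" means the kubelet process never started listening at all, rather than that it is unhealthy. The same check by hand:

    curl -sSL http://localhost:10248/healthz || echo "kubelet not responding"
    sudo systemctl status kubelet --no-pager      # is the unit running at all?
    sudo journalctl -u kubelet -n 50 --no-pager   # and if not, why it exited
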
	I0229 02:19:32.307377  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:19:34.309850  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:19:36.354842  360776 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 02:19:36.355179  360776 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 02:19:36.806074  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:19:38.807249  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:19:41.306690  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:19:43.308582  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:19:45.309102  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:19:46.356431  360776 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 02:19:46.356735  360776 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 02:19:47.808426  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:19:50.306270  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:19:52.307628  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:19:54.806254  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:19:56.800277  361093 pod_ready.go:81] duration metric: took 4m0.000614636s waiting for pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace to be "Ready" ...
	E0229 02:19:56.800308  361093 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace to be "Ready" (will not retry!)
	I0229 02:19:56.800332  361093 pod_ready.go:38] duration metric: took 4m14.556158159s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 02:19:56.800367  361093 kubeadm.go:640] restartCluster took 4m32.656788973s
	W0229 02:19:56.800444  361093 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
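
The 361093 run above polled the pod's Ready condition roughly every 2.5s until the 4m0s budget ran out. A one-shot equivalent of that wait (pod name taken from the log):

    kubectl -n kube-system wait pod/metrics-server-57f55c9bc5-9sdkl \
      --for=condition=Ready --timeout=4m
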
	I0229 02:19:56.800489  361093 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0229 02:20:01.980143  361093 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (5.179624969s)
	I0229 02:20:01.980234  361093 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 02:20:01.996633  361093 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0229 02:20:02.007422  361093 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 02:20:02.017783  361093 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 02:20:02.017835  361093 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0229 02:20:02.234279  361093 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0229 02:20:06.357825  360776 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 02:20:06.358110  360776 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 02:20:10.891699  361093 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0229 02:20:10.891827  361093 kubeadm.go:322] [preflight] Running pre-flight checks
	I0229 02:20:10.891929  361093 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0229 02:20:10.892046  361093 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0229 02:20:10.892166  361093 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0229 02:20:10.892275  361093 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0229 02:20:10.893594  361093 out.go:204]   - Generating certificates and keys ...
	I0229 02:20:10.893681  361093 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0229 02:20:10.893781  361093 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0229 02:20:10.893878  361093 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0229 02:20:10.893977  361093 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0229 02:20:10.894061  361093 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0229 02:20:10.894150  361093 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0229 02:20:10.894255  361093 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0229 02:20:10.894353  361093 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0229 02:20:10.894466  361093 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0229 02:20:10.894563  361093 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0229 02:20:10.894619  361093 kubeadm.go:322] [certs] Using the existing "sa" key
	I0229 02:20:10.894689  361093 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0229 02:20:10.894754  361093 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0229 02:20:10.894831  361093 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0229 02:20:10.894919  361093 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0229 02:20:10.895000  361093 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0229 02:20:10.895120  361093 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0229 02:20:10.895214  361093 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0229 02:20:10.897074  361093 out.go:204]   - Booting up control plane ...
	I0229 02:20:10.897177  361093 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0229 02:20:10.897301  361093 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0229 02:20:10.897401  361093 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0229 02:20:10.897546  361093 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0229 02:20:10.897655  361093 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0229 02:20:10.897730  361093 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0229 02:20:10.897955  361093 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0229 02:20:10.898072  361093 kubeadm.go:322] [apiclient] All control plane components are healthy after 6.003481 seconds
	I0229 02:20:10.898235  361093 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0229 02:20:10.898362  361093 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0229 02:20:10.898450  361093 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0229 02:20:10.898685  361093 kubeadm.go:322] [mark-control-plane] Marking the node embed-certs-665766 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0229 02:20:10.898770  361093 kubeadm.go:322] [bootstrap-token] Using token: 269xha.46kssuu5kaip43vm
	I0229 02:20:10.899874  361093 out.go:204]   - Configuring RBAC rules ...
	I0229 02:20:10.899970  361093 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0229 02:20:10.900078  361093 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0229 02:20:10.900198  361093 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0229 02:20:10.900334  361093 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0229 02:20:10.900513  361093 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0229 02:20:10.900636  361093 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0229 02:20:10.900771  361093 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0229 02:20:10.900814  361093 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0229 02:20:10.900864  361093 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0229 02:20:10.900874  361093 kubeadm.go:322] 
	I0229 02:20:10.900929  361093 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0229 02:20:10.900935  361093 kubeadm.go:322] 
	I0229 02:20:10.901047  361093 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0229 02:20:10.901067  361093 kubeadm.go:322] 
	I0229 02:20:10.901106  361093 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0229 02:20:10.901184  361093 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0229 02:20:10.901249  361093 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0229 02:20:10.901259  361093 kubeadm.go:322] 
	I0229 02:20:10.901323  361093 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0229 02:20:10.901335  361093 kubeadm.go:322] 
	I0229 02:20:10.901410  361093 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0229 02:20:10.901421  361093 kubeadm.go:322] 
	I0229 02:20:10.901485  361093 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0229 02:20:10.901585  361093 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0229 02:20:10.901691  361093 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0229 02:20:10.901702  361093 kubeadm.go:322] 
	I0229 02:20:10.901773  361093 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0229 02:20:10.901869  361093 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0229 02:20:10.901881  361093 kubeadm.go:322] 
	I0229 02:20:10.901991  361093 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 269xha.46kssuu5kaip43vm \
	I0229 02:20:10.902122  361093 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:99ebea2fb56f25d35c82503561681d8a6f4727046bb097effad0d0203de43e75 \
	I0229 02:20:10.902144  361093 kubeadm.go:322] 	--control-plane 
	I0229 02:20:10.902149  361093 kubeadm.go:322] 
	I0229 02:20:10.902254  361093 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0229 02:20:10.902273  361093 kubeadm.go:322] 
	I0229 02:20:10.902377  361093 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 269xha.46kssuu5kaip43vm \
	I0229 02:20:10.902520  361093 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:99ebea2fb56f25d35c82503561681d8a6f4727046bb097effad0d0203de43e75 
	I0229 02:20:10.902534  361093 cni.go:84] Creating CNI manager for ""
	I0229 02:20:10.902541  361093 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0229 02:20:10.904582  361093 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0229 02:20:10.905676  361093 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0229 02:20:10.930137  361093 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
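
The 457-byte conflist payload itself is not included in the report; a representative bridge CNI config of the kind minikube writes for the "bridge" recommendation looks like the following sketch (the subnet and exact plugin options are assumptions, not taken from this log):

    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        { "type": "bridge", "bridge": "bridge", "isDefaultGateway": true,
          "ipMasq": true, "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" } },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF
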
	I0229 02:20:10.979891  361093 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0229 02:20:10.980027  361093 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=f83faa3abac33ca85ff15afa19006ad0a2554d61 minikube.k8s.io/name=embed-certs-665766 minikube.k8s.io/updated_at=2024_02_29T02_20_10_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:20:10.980030  361093 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:20:11.079204  361093 ops.go:34] apiserver oom_adj: -16
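
The oom_adj probe confirms the apiserver is shielded from the OOM killer; -16 strongly deprioritizes it as a kill target. By hand (oom_adj is the legacy kernel interface; modern kernels also expose oom_score_adj):

    cat /proc/$(pgrep kube-apiserver)/oom_adj
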
	I0229 02:20:11.314252  361093 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:20:11.814676  361093 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:20:12.315103  361093 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:20:12.814906  361093 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:20:13.314822  361093 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:20:13.814328  361093 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:20:14.314397  361093 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:20:14.814464  361093 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:20:15.315077  361093 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:20:15.814758  361093 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:20:16.314975  361093 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:20:16.815307  361093 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:20:17.315305  361093 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:20:17.814371  361093 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:20:18.315148  361093 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:20:18.814336  361093 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:20:19.314531  361093 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:20:19.814983  361093 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:20:20.314365  361093 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:20:20.815167  361093 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:20:21.314560  361093 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:20:21.814519  361093 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:20:22.315326  361093 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:20:22.814733  361093 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:20:23.315210  361093 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:20:23.460714  361093 kubeadm.go:1088] duration metric: took 12.480754596s to wait for elevateKubeSystemPrivileges.
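
The burst of "get sa default" calls above is the elevateKubeSystemPrivileges wait: retry every ~500ms until the default ServiceAccount exists, which signals that the controller-manager's token machinery is up. A hand-rolled equivalent of the loop:

    until sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default \
          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done
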
	I0229 02:20:23.460760  361093 kubeadm.go:406] StartCluster complete in 4m59.384955855s
	I0229 02:20:23.460835  361093 settings.go:142] acquiring lock: {Name:mkf6d985c87ae1ba2300543c86d438bf48134dc6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:20:23.460963  361093 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18063-309085/kubeconfig
	I0229 02:20:23.462373  361093 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18063-309085/kubeconfig: {Name:mkd85f0f36cbc770f723a754929a01738907b7bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:20:23.462619  361093 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0229 02:20:23.462712  361093 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0229 02:20:23.462806  361093 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-665766"
	I0229 02:20:23.462833  361093 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-665766"
	I0229 02:20:23.462842  361093 addons.go:69] Setting dashboard=true in profile "embed-certs-665766"
	W0229 02:20:23.462848  361093 addons.go:243] addon storage-provisioner should already be in state true
	I0229 02:20:23.462878  361093 addons.go:234] Setting addon dashboard=true in "embed-certs-665766"
	W0229 02:20:23.462887  361093 addons.go:243] addon dashboard should already be in state true
	I0229 02:20:23.462885  361093 config.go:182] Loaded profile config "embed-certs-665766": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0229 02:20:23.462865  361093 addons.go:69] Setting metrics-server=true in profile "embed-certs-665766"
	I0229 02:20:23.462912  361093 addons.go:234] Setting addon metrics-server=true in "embed-certs-665766"
	I0229 02:20:23.462837  361093 addons.go:69] Setting default-storageclass=true in profile "embed-certs-665766"
	I0229 02:20:23.462940  361093 host.go:66] Checking if "embed-certs-665766" exists ...
	W0229 02:20:23.462921  361093 addons.go:243] addon metrics-server should already be in state true
	I0229 02:20:23.462988  361093 host.go:66] Checking if "embed-certs-665766" exists ...
	I0229 02:20:23.462939  361093 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-665766"
	I0229 02:20:23.462940  361093 host.go:66] Checking if "embed-certs-665766" exists ...
	I0229 02:20:23.463367  361093 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0229 02:20:23.463390  361093 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0229 02:20:23.463409  361093 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:20:23.463414  361093 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:20:23.463390  361093 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0229 02:20:23.463448  361093 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:20:23.463573  361093 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0229 02:20:23.463594  361093 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:20:23.484706  361093 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36455
	I0229 02:20:23.484734  361093 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34121
	I0229 02:20:23.484744  361093 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44029
	I0229 02:20:23.484867  361093 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43235
	I0229 02:20:23.485323  361093 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:20:23.485340  361093 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:20:23.485376  361093 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:20:23.485416  361093 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:20:23.485852  361093 main.go:141] libmachine: Using API Version  1
	I0229 02:20:23.485859  361093 main.go:141] libmachine: Using API Version  1
	I0229 02:20:23.485870  361093 main.go:141] libmachine: Using API Version  1
	I0229 02:20:23.485878  361093 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:20:23.485875  361093 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:20:23.485887  361093 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:20:23.486261  361093 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:20:23.486314  361093 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:20:23.486428  361093 main.go:141] libmachine: Using API Version  1
	I0229 02:20:23.486441  361093 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:20:23.486554  361093 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:20:23.486728  361093 main.go:141] libmachine: (embed-certs-665766) Calling .GetState
	I0229 02:20:23.486962  361093 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0229 02:20:23.487011  361093 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:20:23.487123  361093 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0229 02:20:23.487168  361093 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:20:23.487916  361093 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:20:23.488429  361093 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0229 02:20:23.488468  361093 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:20:23.490061  361093 addons.go:234] Setting addon default-storageclass=true in "embed-certs-665766"
	W0229 02:20:23.490105  361093 addons.go:243] addon default-storageclass should already be in state true
	I0229 02:20:23.490135  361093 host.go:66] Checking if "embed-certs-665766" exists ...
	I0229 02:20:23.490519  361093 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0229 02:20:23.490554  361093 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:20:23.505714  361093 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43269
	I0229 02:20:23.506382  361093 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:20:23.506952  361093 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45455
	I0229 02:20:23.507108  361093 main.go:141] libmachine: Using API Version  1
	I0229 02:20:23.507125  361093 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:20:23.507297  361093 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:20:23.507838  361093 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:20:23.508574  361093 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0229 02:20:23.508601  361093 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:20:23.508856  361093 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42483
	I0229 02:20:23.509055  361093 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33449
	I0229 02:20:23.509239  361093 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:20:23.509409  361093 main.go:141] libmachine: Using API Version  1
	I0229 02:20:23.509420  361093 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:20:23.509928  361093 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:20:23.509971  361093 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:20:23.510020  361093 main.go:141] libmachine: Using API Version  1
	I0229 02:20:23.510043  361093 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:20:23.510427  361093 main.go:141] libmachine: Using API Version  1
	I0229 02:20:23.510446  361093 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:20:23.510456  361093 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:20:23.510457  361093 main.go:141] libmachine: (embed-certs-665766) Calling .GetState
	I0229 02:20:23.510836  361093 main.go:141] libmachine: (embed-certs-665766) Calling .GetState
	I0229 02:20:23.510844  361093 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:20:23.511614  361093 main.go:141] libmachine: (embed-certs-665766) Calling .GetState
	I0229 02:20:23.512674  361093 main.go:141] libmachine: (embed-certs-665766) Calling .DriverName
	I0229 02:20:23.512911  361093 main.go:141] libmachine: (embed-certs-665766) Calling .DriverName
	I0229 02:20:23.514837  361093 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0229 02:20:23.516144  361093 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0229 02:20:23.513612  361093 main.go:141] libmachine: (embed-certs-665766) Calling .DriverName
	I0229 02:20:23.518587  361093 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0229 02:20:23.518631  361093 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0229 02:20:23.519750  361093 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0229 02:20:23.520898  361093 addons.go:426] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0229 02:20:23.520912  361093 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0229 02:20:23.520925  361093 main.go:141] libmachine: (embed-certs-665766) Calling .GetSSHHostname
	I0229 02:20:23.519796  361093 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 02:20:23.519826  361093 main.go:141] libmachine: (embed-certs-665766) Calling .GetSSHHostname
	I0229 02:20:23.522245  361093 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0229 02:20:23.522263  361093 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0229 02:20:23.522279  361093 main.go:141] libmachine: (embed-certs-665766) Calling .GetSSHHostname
	I0229 02:20:23.525267  361093 main.go:141] libmachine: (embed-certs-665766) DBG | domain embed-certs-665766 has defined MAC address 52:54:00:0f:ed:e3 in network mk-embed-certs-665766
	I0229 02:20:23.525478  361093 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34297
	I0229 02:20:23.525918  361093 main.go:141] libmachine: (embed-certs-665766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:ed:e3", ip: ""} in network mk-embed-certs-665766: {Iface:virbr4 ExpiryTime:2024-02-29 03:15:12 +0000 UTC Type:0 Mac:52:54:00:0f:ed:e3 Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:embed-certs-665766 Clientid:01:52:54:00:0f:ed:e3}
	I0229 02:20:23.525942  361093 main.go:141] libmachine: (embed-certs-665766) DBG | domain embed-certs-665766 has defined IP address 192.168.39.252 and MAC address 52:54:00:0f:ed:e3 in network mk-embed-certs-665766
	I0229 02:20:23.526065  361093 main.go:141] libmachine: (embed-certs-665766) Calling .GetSSHPort
	I0229 02:20:23.526171  361093 main.go:141] libmachine: (embed-certs-665766) DBG | domain embed-certs-665766 has defined MAC address 52:54:00:0f:ed:e3 in network mk-embed-certs-665766
	I0229 02:20:23.526249  361093 main.go:141] libmachine: (embed-certs-665766) Calling .GetSSHKeyPath
	I0229 02:20:23.526364  361093 main.go:141] libmachine: (embed-certs-665766) Calling .GetSSHUsername
	I0229 02:20:23.526620  361093 sshutil.go:53] new ssh client: &{IP:192.168.39.252 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-309085/.minikube/machines/embed-certs-665766/id_rsa Username:docker}
	I0229 02:20:23.526677  361093 main.go:141] libmachine: (embed-certs-665766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:ed:e3", ip: ""} in network mk-embed-certs-665766: {Iface:virbr4 ExpiryTime:2024-02-29 03:15:12 +0000 UTC Type:0 Mac:52:54:00:0f:ed:e3 Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:embed-certs-665766 Clientid:01:52:54:00:0f:ed:e3}
	I0229 02:20:23.526706  361093 main.go:141] libmachine: (embed-certs-665766) DBG | domain embed-certs-665766 has defined IP address 192.168.39.252 and MAC address 52:54:00:0f:ed:e3 in network mk-embed-certs-665766
	I0229 02:20:23.526865  361093 main.go:141] libmachine: (embed-certs-665766) DBG | domain embed-certs-665766 has defined MAC address 52:54:00:0f:ed:e3 in network mk-embed-certs-665766
	I0229 02:20:23.526876  361093 main.go:141] libmachine: (embed-certs-665766) Calling .GetSSHPort
	I0229 02:20:23.526891  361093 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:20:23.527094  361093 main.go:141] libmachine: (embed-certs-665766) Calling .GetSSHKeyPath
	I0229 02:20:23.527286  361093 main.go:141] libmachine: (embed-certs-665766) Calling .GetSSHUsername
	I0229 02:20:23.527370  361093 main.go:141] libmachine: (embed-certs-665766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:ed:e3", ip: ""} in network mk-embed-certs-665766: {Iface:virbr4 ExpiryTime:2024-02-29 03:15:12 +0000 UTC Type:0 Mac:52:54:00:0f:ed:e3 Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:embed-certs-665766 Clientid:01:52:54:00:0f:ed:e3}
	I0229 02:20:23.527392  361093 main.go:141] libmachine: (embed-certs-665766) DBG | domain embed-certs-665766 has defined IP address 192.168.39.252 and MAC address 52:54:00:0f:ed:e3 in network mk-embed-certs-665766
	I0229 02:20:23.527414  361093 main.go:141] libmachine: Using API Version  1
	I0229 02:20:23.527426  361093 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:20:23.527431  361093 sshutil.go:53] new ssh client: &{IP:192.168.39.252 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-309085/.minikube/machines/embed-certs-665766/id_rsa Username:docker}
	I0229 02:20:23.527440  361093 main.go:141] libmachine: (embed-certs-665766) Calling .GetSSHPort
	I0229 02:20:23.527600  361093 main.go:141] libmachine: (embed-certs-665766) Calling .GetSSHKeyPath
	I0229 02:20:23.527770  361093 main.go:141] libmachine: (embed-certs-665766) Calling .GetSSHUsername
	I0229 02:20:23.527837  361093 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:20:23.527921  361093 sshutil.go:53] new ssh client: &{IP:192.168.39.252 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-309085/.minikube/machines/embed-certs-665766/id_rsa Username:docker}
	I0229 02:20:23.528137  361093 main.go:141] libmachine: (embed-certs-665766) Calling .GetState
	I0229 02:20:23.529551  361093 main.go:141] libmachine: (embed-certs-665766) Calling .DriverName
	I0229 02:20:23.529764  361093 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0229 02:20:23.529779  361093 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0229 02:20:23.529795  361093 main.go:141] libmachine: (embed-certs-665766) Calling .GetSSHHostname
	I0229 02:20:23.532530  361093 main.go:141] libmachine: (embed-certs-665766) DBG | domain embed-certs-665766 has defined MAC address 52:54:00:0f:ed:e3 in network mk-embed-certs-665766
	I0229 02:20:23.532935  361093 main.go:141] libmachine: (embed-certs-665766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:ed:e3", ip: ""} in network mk-embed-certs-665766: {Iface:virbr4 ExpiryTime:2024-02-29 03:15:12 +0000 UTC Type:0 Mac:52:54:00:0f:ed:e3 Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:embed-certs-665766 Clientid:01:52:54:00:0f:ed:e3}
	I0229 02:20:23.532987  361093 main.go:141] libmachine: (embed-certs-665766) DBG | domain embed-certs-665766 has defined IP address 192.168.39.252 and MAC address 52:54:00:0f:ed:e3 in network mk-embed-certs-665766
	I0229 02:20:23.533201  361093 main.go:141] libmachine: (embed-certs-665766) Calling .GetSSHPort
	I0229 02:20:23.533347  361093 main.go:141] libmachine: (embed-certs-665766) Calling .GetSSHKeyPath
	I0229 02:20:23.533475  361093 main.go:141] libmachine: (embed-certs-665766) Calling .GetSSHUsername
	I0229 02:20:23.533597  361093 sshutil.go:53] new ssh client: &{IP:192.168.39.252 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-309085/.minikube/machines/embed-certs-665766/id_rsa Username:docker}
	I0229 02:20:23.717181  361093 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0229 02:20:23.718730  361093 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0229 02:20:23.718746  361093 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0229 02:20:23.751609  361093 addons.go:426] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0229 02:20:23.751628  361093 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0229 02:20:23.774666  361093 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
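
Unrolled, the pipeline just launched does three things: dump the CoreDNS ConfigMap, splice a hosts{} block mapping host.minikube.internal to the host gateway in front of the forward directive (plus a log directive before errors), and replace the ConfigMap in place. Its Completed line appears further down; the same command, made readable:

    # Word-splitting on $KUBECTL is deliberate here; fine for an interactive sketch.
    KUBECTL="sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig"
    $KUBECTL -n kube-system get configmap coredns -o yaml \
      | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' \
            -e '/^        errors *$/i \        log' \
      | $KUBECTL replace -f -
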
	I0229 02:20:23.783425  361093 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0229 02:20:23.783444  361093 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0229 02:20:23.799321  361093 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0229 02:20:23.843414  361093 addons.go:426] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0229 02:20:23.843438  361093 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0229 02:20:23.857004  361093 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0229 02:20:23.857027  361093 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0229 02:20:23.930205  361093 addons.go:426] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0229 02:20:23.930233  361093 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0229 02:20:23.943684  361093 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0229 02:20:23.970259  361093 kapi.go:248] "coredns" deployment in "kube-system" namespace and "embed-certs-665766" context rescaled to 1 replicas
	I0229 02:20:23.970298  361093 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.252 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0229 02:20:23.972009  361093 out.go:177] * Verifying Kubernetes components...
	I0229 02:20:23.973240  361093 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
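
start.go now budgets 6m0s for the node itself to become Ready (the kubelet is-active probe above is the first component check). A one-shot equivalent of that node wait, with the node name taken from the log:

    kubectl wait --for=condition=Ready node/embed-certs-665766 --timeout=6m
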
	I0229 02:20:24.061065  361093 addons.go:426] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0229 02:20:24.061103  361093 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0229 02:20:24.147407  361093 addons.go:426] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0229 02:20:24.147441  361093 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0229 02:20:24.204201  361093 addons.go:426] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0229 02:20:24.204236  361093 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0229 02:20:24.243191  361093 addons.go:426] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0229 02:20:24.243237  361093 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0229 02:20:24.263274  361093 addons.go:426] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0229 02:20:24.263299  361093 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0229 02:20:24.283356  361093 addons.go:426] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0229 02:20:24.283374  361093 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0229 02:20:24.303371  361093 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0229 02:20:25.432821  361093 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.715600333s)
	I0229 02:20:25.432877  361093 main.go:141] libmachine: Making call to close driver server
	I0229 02:20:25.432884  361093 main.go:141] libmachine: (embed-certs-665766) Calling .Close
	I0229 02:20:25.433179  361093 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:20:25.433198  361093 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:20:25.433214  361093 main.go:141] libmachine: Making call to close driver server
	I0229 02:20:25.433223  361093 main.go:141] libmachine: (embed-certs-665766) Calling .Close
	I0229 02:20:25.433233  361093 main.go:141] libmachine: (embed-certs-665766) DBG | Closing plugin on server side
	I0229 02:20:25.433477  361093 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:20:25.433499  361093 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:20:25.433519  361093 main.go:141] libmachine: (embed-certs-665766) DBG | Closing plugin on server side
	I0229 02:20:25.441485  361093 main.go:141] libmachine: Making call to close driver server
	I0229 02:20:25.441506  361093 main.go:141] libmachine: (embed-certs-665766) Calling .Close
	I0229 02:20:25.441772  361093 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:20:25.441788  361093 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:20:25.803307  361093 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.028599375s)
	I0229 02:20:25.803341  361093 start.go:929] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
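The sed pipeline completed above is how minikube splices a host record into the CoreDNS Corefile: it inserts a hosts block ahead of the forward plugin so host.minikube.internal resolves to the host-only gateway. A minimal manual equivalent, assuming a stock CoreDNS ConfigMap (the 192.168.39.1 address comes from this run):

    # Insert a hosts{} stanza before the forward plugin, then replace the ConfigMap in place.
    kubectl -n kube-system get configmap coredns -o yaml \
      | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' \
      | kubectl replace -f -

The resulting Corefile fragment, for reference:

    hosts {
       192.168.39.1 host.minikube.internal
       fallthrough
    }
    forward . /etc/resolv.conf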
	I0229 02:20:26.329323  361093 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.529964751s)
	I0229 02:20:26.329380  361093 main.go:141] libmachine: Making call to close driver server
	I0229 02:20:26.329389  361093 main.go:141] libmachine: (embed-certs-665766) Calling .Close
	I0229 02:20:26.329754  361093 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:20:26.329817  361093 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:20:26.329838  361093 main.go:141] libmachine: Making call to close driver server
	I0229 02:20:26.329836  361093 main.go:141] libmachine: (embed-certs-665766) DBG | Closing plugin on server side
	I0229 02:20:26.329847  361093 main.go:141] libmachine: (embed-certs-665766) Calling .Close
	I0229 02:20:26.330130  361093 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:20:26.330149  361093 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:20:26.330176  361093 main.go:141] libmachine: (embed-certs-665766) DBG | Closing plugin on server side
	I0229 02:20:26.411660  361093 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.438378455s)
	I0229 02:20:26.411727  361093 node_ready.go:35] waiting up to 6m0s for node "embed-certs-665766" to be "Ready" ...
	I0229 02:20:26.411785  361093 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.468059693s)
	I0229 02:20:26.411846  361093 main.go:141] libmachine: Making call to close driver server
	I0229 02:20:26.411904  361093 main.go:141] libmachine: (embed-certs-665766) Calling .Close
	I0229 02:20:26.412327  361093 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:20:26.412378  361093 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:20:26.412400  361093 main.go:141] libmachine: Making call to close driver server
	I0229 02:20:26.412418  361093 main.go:141] libmachine: (embed-certs-665766) Calling .Close
	I0229 02:20:26.412733  361093 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:20:26.412759  361093 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:20:26.412778  361093 addons.go:470] Verifying addon metrics-server=true in "embed-certs-665766"
	I0229 02:20:26.429799  361093 node_ready.go:49] node "embed-certs-665766" has status "Ready":"True"
	I0229 02:20:26.429834  361093 node_ready.go:38] duration metric: took 18.091958ms waiting for node "embed-certs-665766" to be "Ready" ...
	I0229 02:20:26.429848  361093 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
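The extra wait above polls each system-critical pod for a Ready condition. Roughly the same check expressed with plain kubectl rather than minikube's internal pod_ready helpers (a sketch; the label selector and timeout are taken from the log line above):

    # Wait for the CoreDNS pods, one of the label selectors polled above, to report Ready.
    kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=6m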
	I0229 02:20:26.443918  361093 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-pf9x9" in "kube-system" namespace to be "Ready" ...
	I0229 02:20:26.453871  361093 pod_ready.go:92] pod "coredns-5dd5756b68-pf9x9" in "kube-system" namespace has status "Ready":"True"
	I0229 02:20:26.453893  361093 pod_ready.go:81] duration metric: took 9.938572ms waiting for pod "coredns-5dd5756b68-pf9x9" in "kube-system" namespace to be "Ready" ...
	I0229 02:20:26.453902  361093 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-665766" in "kube-system" namespace to be "Ready" ...
	I0229 02:20:26.459920  361093 pod_ready.go:92] pod "etcd-embed-certs-665766" in "kube-system" namespace has status "Ready":"True"
	I0229 02:20:26.459946  361093 pod_ready.go:81] duration metric: took 6.037204ms waiting for pod "etcd-embed-certs-665766" in "kube-system" namespace to be "Ready" ...
	I0229 02:20:26.459959  361093 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-665766" in "kube-system" namespace to be "Ready" ...
	I0229 02:20:26.465595  361093 pod_ready.go:92] pod "kube-apiserver-embed-certs-665766" in "kube-system" namespace has status "Ready":"True"
	I0229 02:20:26.465611  361093 pod_ready.go:81] duration metric: took 5.645555ms waiting for pod "kube-apiserver-embed-certs-665766" in "kube-system" namespace to be "Ready" ...
	I0229 02:20:26.465620  361093 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-665766" in "kube-system" namespace to be "Ready" ...
	I0229 02:20:26.470943  361093 pod_ready.go:92] pod "kube-controller-manager-embed-certs-665766" in "kube-system" namespace has status "Ready":"True"
	I0229 02:20:26.470960  361093 pod_ready.go:81] duration metric: took 5.334268ms waiting for pod "kube-controller-manager-embed-certs-665766" in "kube-system" namespace to be "Ready" ...
	I0229 02:20:26.470968  361093 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-gtjq6" in "kube-system" namespace to be "Ready" ...
	I0229 02:20:26.815785  361093 pod_ready.go:92] pod "kube-proxy-gtjq6" in "kube-system" namespace has status "Ready":"True"
	I0229 02:20:26.815809  361093 pod_ready.go:81] duration metric: took 344.835753ms waiting for pod "kube-proxy-gtjq6" in "kube-system" namespace to be "Ready" ...
	I0229 02:20:26.815820  361093 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-665766" in "kube-system" namespace to be "Ready" ...
	I0229 02:20:27.179678  361093 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.87625995s)
	I0229 02:20:27.179741  361093 main.go:141] libmachine: Making call to close driver server
	I0229 02:20:27.179758  361093 main.go:141] libmachine: (embed-certs-665766) Calling .Close
	I0229 02:20:27.180115  361093 main.go:141] libmachine: (embed-certs-665766) DBG | Closing plugin on server side
	I0229 02:20:27.180169  361093 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:20:27.180191  361093 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:20:27.180201  361093 main.go:141] libmachine: Making call to close driver server
	I0229 02:20:27.180212  361093 main.go:141] libmachine: (embed-certs-665766) Calling .Close
	I0229 02:20:27.180476  361093 main.go:141] libmachine: (embed-certs-665766) DBG | Closing plugin on server side
	I0229 02:20:27.180521  361093 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:20:27.180534  361093 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:20:27.182123  361093 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-665766 addons enable metrics-server
	
	I0229 02:20:27.183370  361093 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0229 02:20:27.184639  361093 addons.go:505] enable addons completed in 3.721930887s: enabled=[default-storageclass storage-provisioner metrics-server dashboard]
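All four addons are reported enabled at this point. A quick way to confirm from outside the logs, assuming the profile name from this run:

    # List addon states for the profile, then check the metrics-server Deployment it created.
    minikube -p embed-certs-665766 addons list
    kubectl -n kube-system get deploy metrics-server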
	I0229 02:20:27.223120  361093 pod_ready.go:92] pod "kube-scheduler-embed-certs-665766" in "kube-system" namespace has status "Ready":"True"
	I0229 02:20:27.223149  361093 pod_ready.go:81] duration metric: took 407.321396ms waiting for pod "kube-scheduler-embed-certs-665766" in "kube-system" namespace to be "Ready" ...
	I0229 02:20:27.223163  361093 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace to be "Ready" ...
	I0229 02:20:29.231076  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:20:31.729827  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:20:33.745431  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:20:36.231699  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:20:38.238868  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:20:40.733145  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:20:43.231183  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:20:46.359040  360776 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 02:20:46.359315  360776 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 02:20:46.359346  360776 kubeadm.go:322] 
	I0229 02:20:46.359398  360776 kubeadm.go:322] Unfortunately, an error has occurred:
	I0229 02:20:46.359458  360776 kubeadm.go:322] 	timed out waiting for the condition
	I0229 02:20:46.359467  360776 kubeadm.go:322] 
	I0229 02:20:46.359511  360776 kubeadm.go:322] This error is likely caused by:
	I0229 02:20:46.359565  360776 kubeadm.go:322] 	- The kubelet is not running
	I0229 02:20:46.359711  360776 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0229 02:20:46.359720  360776 kubeadm.go:322] 
	I0229 02:20:46.359823  360776 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0229 02:20:46.359867  360776 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0229 02:20:46.359894  360776 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0229 02:20:46.359900  360776 kubeadm.go:322] 
	I0229 02:20:46.360005  360776 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0229 02:20:46.360128  360776 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0229 02:20:46.360236  360776 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0229 02:20:46.360310  360776 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0229 02:20:46.360381  360776 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0229 02:20:46.360410  360776 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0229 02:20:46.361502  360776 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0229 02:20:46.361603  360776 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0229 02:20:46.361688  360776 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	W0229 02:20:46.361890  360776 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
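The kubeadm output above spells out the diagnosis path. Run inside the node (e.g. via minikube ssh), the checks look like this; since this profile uses containerd, crictl stands in for the docker commands kubeadm suggests (a sketch, not part of the test run):

    systemctl status kubelet                       # is the unit active at all?
    journalctl -xeu kubelet | tail -n 50           # most recent kubelet errors
    curl -sSL http://localhost:10248/healthz       # prints "ok" when the kubelet is healthy
    sudo crictl ps -a | grep kube | grep -v pause  # any surviving control-plane containers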
	
	I0229 02:20:46.361946  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0229 02:20:46.833083  360776 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 02:20:46.850670  360776 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 02:20:46.863291  360776 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 02:20:46.863352  360776 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0229 02:20:46.929466  360776 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0229 02:20:46.929532  360776 kubeadm.go:322] [preflight] Running pre-flight checks
	I0229 02:20:47.064941  360776 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0229 02:20:47.065277  360776 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0229 02:20:47.065515  360776 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0229 02:20:47.284721  360776 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0229 02:20:47.285859  360776 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0229 02:20:47.295028  360776 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0229 02:20:47.429614  360776 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0229 02:20:47.431229  360776 out.go:204]   - Generating certificates and keys ...
	I0229 02:20:47.431315  360776 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0229 02:20:47.431389  360776 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0229 02:20:47.431487  360776 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0229 02:20:47.431603  360776 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0229 02:20:47.431719  360776 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0229 02:20:47.431796  360776 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0229 02:20:47.431890  360776 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0229 02:20:47.431974  360776 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0229 02:20:47.432093  360776 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0229 02:20:47.432212  360776 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0229 02:20:47.432275  360776 kubeadm.go:322] [certs] Using the existing "sa" key
	I0229 02:20:47.432366  360776 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0229 02:20:47.946255  360776 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0229 02:20:48.258186  360776 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0229 02:20:48.398982  360776 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0229 02:20:48.545961  360776 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0229 02:20:48.546829  360776 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0229 02:20:45.234594  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:20:47.731325  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:20:49.731500  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:20:48.548500  360776 out.go:204]   - Booting up control plane ...
	I0229 02:20:48.548614  360776 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0229 02:20:48.552604  360776 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0229 02:20:48.553548  360776 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0229 02:20:48.554256  360776 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0229 02:20:48.558508  360776 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0229 02:20:52.231128  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:20:54.231680  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:20:56.730802  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:20:58.731112  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:21:01.232479  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:21:03.234385  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:21:05.730268  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:21:08.231970  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:21:10.233205  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:21:12.734859  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:21:15.230796  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:21:17.231363  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:21:19.231526  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:21:21.731071  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:21:23.732749  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:21:26.230929  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:21:28.731131  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:21:28.560199  360776 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0229 02:21:28.560645  360776 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 02:21:28.560944  360776 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 02:21:31.231022  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:21:33.731025  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:21:33.561853  360776 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 02:21:33.562057  360776 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 02:21:35.731752  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:21:38.229754  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:21:40.229986  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:21:42.730384  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:21:44.730788  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:21:43.562844  360776 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 02:21:43.563063  360776 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 02:21:46.731643  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:21:49.232075  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:21:51.729864  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:21:53.730399  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:21:55.730728  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:21:57.732563  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:22:00.232769  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:22:02.233327  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:22:04.730582  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:22:03.563980  360776 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 02:22:03.564274  360776 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 02:22:06.730978  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:22:08.731753  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:22:10.733273  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:22:13.230888  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:22:15.231384  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:22:17.233309  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:22:19.736876  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:22:22.231745  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:22:24.730148  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:22:26.730332  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:22:28.731241  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:22:31.232262  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:22:33.729969  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:22:36.230298  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:22:38.232199  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:22:43.566143  360776 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 02:22:43.566419  360776 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 02:22:43.566432  360776 kubeadm.go:322] 
	I0229 02:22:43.566494  360776 kubeadm.go:322] Unfortunately, an error has occurred:
	I0229 02:22:43.566562  360776 kubeadm.go:322] 	timed out waiting for the condition
	I0229 02:22:43.566573  360776 kubeadm.go:322] 
	I0229 02:22:43.566621  360776 kubeadm.go:322] This error is likely caused by:
	I0229 02:22:43.566669  360776 kubeadm.go:322] 	- The kubelet is not running
	I0229 02:22:43.566789  360776 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0229 02:22:43.566798  360776 kubeadm.go:322] 
	I0229 02:22:43.566954  360776 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0229 02:22:43.567000  360776 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0229 02:22:43.567049  360776 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0229 02:22:43.567060  360776 kubeadm.go:322] 
	I0229 02:22:43.567282  360776 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0229 02:22:43.567417  360776 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0229 02:22:43.567521  360776 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0229 02:22:43.567592  360776 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0229 02:22:43.567684  360776 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0229 02:22:43.567736  360776 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0229 02:22:43.568136  360776 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0229 02:22:43.568244  360776 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0229 02:22:43.568368  360776 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0229 02:22:43.568439  360776 kubeadm.go:406] StartCluster complete in 8m6.863500244s
	I0229 02:22:43.568498  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:22:43.568644  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:22:43.619887  360776 cri.go:89] found id: ""
	I0229 02:22:43.619917  360776 logs.go:276] 0 containers: []
	W0229 02:22:43.619926  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:22:43.619932  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:22:43.619996  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:22:43.658073  360776 cri.go:89] found id: ""
	I0229 02:22:43.658110  360776 logs.go:276] 0 containers: []
	W0229 02:22:43.658120  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:22:43.658127  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:22:43.658197  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:22:43.697445  360776 cri.go:89] found id: ""
	I0229 02:22:43.697476  360776 logs.go:276] 0 containers: []
	W0229 02:22:43.697489  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:22:43.697495  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:22:43.697561  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:22:43.736241  360776 cri.go:89] found id: ""
	I0229 02:22:43.736270  360776 logs.go:276] 0 containers: []
	W0229 02:22:43.736278  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:22:43.736285  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:22:43.736345  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:22:43.775185  360776 cri.go:89] found id: ""
	I0229 02:22:43.775212  360776 logs.go:276] 0 containers: []
	W0229 02:22:43.775221  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:22:43.775227  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:22:43.775292  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:22:43.815309  360776 cri.go:89] found id: ""
	I0229 02:22:43.815338  360776 logs.go:276] 0 containers: []
	W0229 02:22:43.815347  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:22:43.815353  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:22:43.815436  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:22:43.860248  360776 cri.go:89] found id: ""
	I0229 02:22:43.860284  360776 logs.go:276] 0 containers: []
	W0229 02:22:43.860296  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:22:43.860305  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:22:43.860375  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:22:43.918615  360776 cri.go:89] found id: ""
	I0229 02:22:43.918644  360776 logs.go:276] 0 containers: []
	W0229 02:22:43.918656  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
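Each cri.go step above issues one crictl query per control-plane component; all come back empty because no container ever started. The equivalent one-liner on the node (component names taken from the listing above):

    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet kubernetes-dashboard; do
      echo "== $c =="; sudo crictl ps -a --quiet --name="$c"
    done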
	I0229 02:22:43.918671  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:22:43.918687  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:22:43.966006  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:22:43.966045  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:22:43.981843  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:22:43.981875  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:22:44.056838  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:22:44.056870  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:22:44.056887  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:22:44.090353  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:22:44.090384  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0229 02:22:44.143169  360776 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0229 02:22:44.143235  360776 out.go:239] * 
	W0229 02:22:44.143336  360776 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0229 02:22:44.143366  360776 out.go:239] * 
	W0229 02:22:44.144361  360776 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0229 02:22:44.147267  360776 out.go:177] 
	W0229 02:22:44.148417  360776 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
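	The kubeadm hint above is written for docker, but this profile runs containerd, so the crictl equivalents apply. A minimal sketch, assuming the default minikube containerd socket; CONTAINERID is a placeholder taken from the listing:
	  sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause
	  sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID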
	
	W0229 02:22:44.148458  360776 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0229 02:22:44.148476  360776 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0229 02:22:44.149710  360776 out.go:177] 
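	A sketch of the retry the suggestion points at, reusing the profile name, driver, and runtime from this run; the kubelet flag name comes from the suggestion itself and is not independently verified here:
	  out/minikube-linux-amd64 start -p old-k8s-version-254968 --driver=kvm2 --container-runtime=containerd --extra-config=kubelet.cgroup-driver=systemd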
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> containerd <==
	Feb 29 02:14:34 old-k8s-version-254968 containerd[614]: time="2024-02-29T02:14:34.165624877Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Feb 29 02:14:34 old-k8s-version-254968 containerd[614]: time="2024-02-29T02:14:34.165698335Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Feb 29 02:14:34 old-k8s-version-254968 containerd[614]: time="2024-02-29T02:14:34.165745697Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Feb 29 02:14:34 old-k8s-version-254968 containerd[614]: time="2024-02-29T02:14:34.165787935Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Feb 29 02:14:34 old-k8s-version-254968 containerd[614]: time="2024-02-29T02:14:34.165917244Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Feb 29 02:14:34 old-k8s-version-254968 containerd[614]: time="2024-02-29T02:14:34.165968270Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Feb 29 02:14:34 old-k8s-version-254968 containerd[614]: time="2024-02-29T02:14:34.166006973Z" level=info msg="NRI interface is disabled by configuration."
	Feb 29 02:14:34 old-k8s-version-254968 containerd[614]: time="2024-02-29T02:14:34.166044436Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
	Feb 29 02:14:34 old-k8s-version-254968 containerd[614]: time="2024-02-29T02:14:34.166543615Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:true IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath:/etc/containerd/certs.d Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress: StreamServerPort:10010 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.1 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s} ContainerdRootDir:/mnt/vda1/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/mnt/vda1/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
	Feb 29 02:14:34 old-k8s-version-254968 containerd[614]: time="2024-02-29T02:14:34.166717042Z" level=info msg="Connect containerd service"
	Feb 29 02:14:34 old-k8s-version-254968 containerd[614]: time="2024-02-29T02:14:34.166807336Z" level=info msg="using legacy CRI server"
	Feb 29 02:14:34 old-k8s-version-254968 containerd[614]: time="2024-02-29T02:14:34.166857305Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Feb 29 02:14:34 old-k8s-version-254968 containerd[614]: time="2024-02-29T02:14:34.166925237Z" level=info msg="Get image filesystem path \"/mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
	Feb 29 02:14:34 old-k8s-version-254968 containerd[614]: time="2024-02-29T02:14:34.168440964Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Feb 29 02:14:34 old-k8s-version-254968 containerd[614]: time="2024-02-29T02:14:34.169467518Z" level=info msg="Start subscribing containerd event"
	Feb 29 02:14:34 old-k8s-version-254968 containerd[614]: time="2024-02-29T02:14:34.169852898Z" level=info msg="Start recovering state"
	Feb 29 02:14:34 old-k8s-version-254968 containerd[614]: time="2024-02-29T02:14:34.169759950Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Feb 29 02:14:34 old-k8s-version-254968 containerd[614]: time="2024-02-29T02:14:34.170354766Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Feb 29 02:14:34 old-k8s-version-254968 containerd[614]: time="2024-02-29T02:14:34.216434996Z" level=info msg="Start event monitor"
	Feb 29 02:14:34 old-k8s-version-254968 containerd[614]: time="2024-02-29T02:14:34.216570893Z" level=info msg="Start snapshots syncer"
	Feb 29 02:14:34 old-k8s-version-254968 containerd[614]: time="2024-02-29T02:14:34.216584766Z" level=info msg="Start cni network conf syncer for default"
	Feb 29 02:14:34 old-k8s-version-254968 containerd[614]: time="2024-02-29T02:14:34.216590881Z" level=info msg="Start streaming server"
	Feb 29 02:14:34 old-k8s-version-254968 containerd[614]: time="2024-02-29T02:14:34.216768197Z" level=info msg="containerd successfully booted in 0.090655s"
	Feb 29 02:18:50 old-k8s-version-254968 containerd[614]: time="2024-02-29T02:18:50.110070145Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE        \"/etc/cni/net.d/87-podman-bridge.conflist.mk_disabled\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Feb 29 02:18:50 old-k8s-version-254968 containerd[614]: time="2024-02-29T02:18:50.110410570Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE        \"/etc/cni/net.d/.keep\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
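	The CNI errors above all reduce to an empty /etc/cni/net.d. One way to confirm from the host, a sketch assuming the profile name used in this run:
	  out/minikube-linux-amd64 ssh -p old-k8s-version-254968 -- "ls -la /etc/cni/net.d"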
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Feb29 02:14] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.054511] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.043108] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.634203] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.396865] systemd-fstab-generator[114]: Ignoring "noauto" option for root device
	[  +1.706137] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +5.508894] systemd-fstab-generator[477]: Ignoring "noauto" option for root device
	[  +0.058297] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.061765] systemd-fstab-generator[489]: Ignoring "noauto" option for root device
	[  +0.223600] systemd-fstab-generator[503]: Ignoring "noauto" option for root device
	[  +0.145548] systemd-fstab-generator[515]: Ignoring "noauto" option for root device
	[  +0.315865] systemd-fstab-generator[544]: Ignoring "noauto" option for root device
	[  +6.792896] systemd-fstab-generator[605]: Ignoring "noauto" option for root device
	[  +0.059937] kauditd_printk_skb: 158 callbacks suppressed
	[ +14.232197] systemd-fstab-generator[867]: Ignoring "noauto" option for root device
	[  +0.066766] kauditd_printk_skb: 18 callbacks suppressed
	[Feb29 02:18] systemd-fstab-generator[7959]: Ignoring "noauto" option for root device
	[  +0.063045] kauditd_printk_skb: 21 callbacks suppressed
	[Feb29 02:20] systemd-fstab-generator[9666]: Ignoring "noauto" option for root device
	[  +0.073310] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 02:22:45 up 8 min,  0 users,  load average: 0.44, 0.43, 0.22
	Linux old-k8s-version-254968 5.10.207 #1 SMP Fri Feb 23 02:44:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Feb 29 02:22:43 old-k8s-version-254968 kubelet[11323]: F0229 02:22:43.900340   11323 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Feb 29 02:22:43 old-k8s-version-254968 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Feb 29 02:22:43 old-k8s-version-254968 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Feb 29 02:22:44 old-k8s-version-254968 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 156.
	Feb 29 02:22:44 old-k8s-version-254968 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Feb 29 02:22:44 old-k8s-version-254968 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Feb 29 02:22:44 old-k8s-version-254968 kubelet[11365]: I0229 02:22:44.702132   11365 server.go:410] Version: v1.16.0
	Feb 29 02:22:44 old-k8s-version-254968 kubelet[11365]: I0229 02:22:44.702478   11365 plugins.go:100] No cloud provider specified.
	Feb 29 02:22:44 old-k8s-version-254968 kubelet[11365]: I0229 02:22:44.702488   11365 server.go:773] Client rotation is on, will bootstrap in background
	Feb 29 02:22:44 old-k8s-version-254968 kubelet[11365]: I0229 02:22:44.706339   11365 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Feb 29 02:22:44 old-k8s-version-254968 kubelet[11365]: W0229 02:22:44.708438   11365 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Feb 29 02:22:44 old-k8s-version-254968 kubelet[11365]: F0229 02:22:44.708880   11365 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Feb 29 02:22:44 old-k8s-version-254968 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Feb 29 02:22:44 old-k8s-version-254968 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Feb 29 02:22:45 old-k8s-version-254968 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 157.
	Feb 29 02:22:45 old-k8s-version-254968 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Feb 29 02:22:45 old-k8s-version-254968 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Feb 29 02:22:45 old-k8s-version-254968 kubelet[11393]: I0229 02:22:45.445915   11393 server.go:410] Version: v1.16.0
	Feb 29 02:22:45 old-k8s-version-254968 kubelet[11393]: I0229 02:22:45.446147   11393 plugins.go:100] No cloud provider specified.
	Feb 29 02:22:45 old-k8s-version-254968 kubelet[11393]: I0229 02:22:45.446211   11393 server.go:773] Client rotation is on, will bootstrap in background
	Feb 29 02:22:45 old-k8s-version-254968 kubelet[11393]: I0229 02:22:45.449103   11393 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Feb 29 02:22:45 old-k8s-version-254968 kubelet[11393]: W0229 02:22:45.450091   11393 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Feb 29 02:22:45 old-k8s-version-254968 kubelet[11393]: F0229 02:22:45.451324   11393 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Feb 29 02:22:45 old-k8s-version-254968 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Feb 29 02:22:45 old-k8s-version-254968 systemd[1]: kubelet.service: Failed with result 'exit-code'.
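	The fatal repeated above ("mountpoint for cpu not found") means the kubelet cannot find a cpu cgroup hierarchy on the node. A sketch of how to confirm, assuming cgroup v1 controller naming:
	  out/minikube-linux-amd64 ssh -p old-k8s-version-254968 -- "mount | grep cgroup"
	  out/minikube-linux-amd64 ssh -p old-k8s-version-254968 -- "cat /proc/cgroups"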
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-254968 -n old-k8s-version-254968
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-254968 -n old-k8s-version-254968: exit status 2 (251.616891ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-254968" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (519.36s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (542.52s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
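Each WARNING below is the test helper retrying the same label-selector list against the apiserver; the equivalent manual query, a sketch assuming the kubeconfig context matches the profile name, would be:
  kubectl --context old-k8s-version-254968 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard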
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
[the WARNING above repeated 11 more times]
E0229 02:22:57.969565  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/flannel-704272/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
[the WARNING above repeated 39 more times]
E0229 02:23:37.942688  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/default-k8s-diff-port-254367/client.crt: no such file or directory
E0229 02:23:38.335459  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/no-preload-907398/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
[the WARNING above repeated 7 more times]
E0229 02:23:46.416855  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/enable-default-cni-704272/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
[the WARNING above repeated 31 more times]
E0229 02:24:18.530187  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/bridge-704272/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
[the WARNING above repeated 9 more times]
E0229 02:24:28.597093  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/addons-026134/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
[previous line repeated 49 more times]
E0229 02:25:17.834890  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/auto-704272/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
[previous line repeated 30 more times]
E0229 02:25:48.921954  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/calico-704272/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
[previous line repeated 1 more time]
E0229 02:25:51.646630  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/addons-026134/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
[previous line repeated 2 more times]
E0229 02:25:54.098380  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/default-k8s-diff-port-254367/client.crt: no such file or directory
E0229 02:25:54.493077  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/no-preload-907398/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
[previous line repeated 11 more times]
E0229 02:26:05.889235  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/custom-flannel-704272/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
[previous line repeated 7 more times]
E0229 02:26:14.620919  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/functional-601906/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
[previous line repeated 7 more times]
E0229 02:26:21.783190  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/default-k8s-diff-port-254367/client.crt: no such file or directory
E0229 02:26:22.175764  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/no-preload-907398/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
[previous line repeated 18 more times]
E0229 02:26:40.882419  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/auto-704272/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
[previous line repeated 10 more times]
E0229 02:26:52.590585  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/kindnet-704272/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
[previous line repeated 19 more times]
E0229 02:27:11.967279  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/calico-704272/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
[last message repeated 16 more times]
E0229 02:27:28.932283  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/custom-flannel-704272/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
[last message repeated 28 more times]
E0229 02:27:57.969880  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/flannel-704272/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
[last message repeated 16 more times]
E0229 02:28:15.636059  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/kindnet-704272/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
[last message repeated 30 more times]
E0229 02:28:46.415827  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/enable-default-cni-704272/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
[last message repeated 31 more times]
E0229 02:29:18.529695  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/bridge-704272/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
[last message repeated 2 more times]
E0229 02:29:21.015092  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/flannel-704272/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
[last message repeated 6 more times]
E0229 02:29:28.597066  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/addons-026134/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
[last message repeated 25 more times]
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
E0229 02:30:09.461649  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/enable-default-cni-704272/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
[last message repeated 8 more times]
E0229 02:30:17.835294  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/auto-704272/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
[last message repeated 22 more times]
E0229 02:30:41.572975  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/bridge-704272/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
[last message repeated 7 more times]
E0229 02:30:48.921242  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/calico-704272/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
[last message repeated 4 more times]
E0229 02:30:54.098200  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/default-k8s-diff-port-254367/client.crt: no such file or directory
E0229 02:30:54.492749  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/no-preload-907398/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
[last message repeated 11 more times]
E0229 02:31:05.889791  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/custom-flannel-704272/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
[last message repeated 7 more times]
E0229 02:31:14.620711  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/functional-601906/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
[last message repeated 31 more times]
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-254968 -n old-k8s-version-254968
start_stop_delete_test.go:274: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-254968 -n old-k8s-version-254968: exit status 2 (274.062174ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:274: status error: exit status 2 (may be ok)
start_stop_delete_test.go:274: "old-k8s-version-254968" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
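The wall of warnings above is the test's poll loop surfacing: the helper repeatedly lists pods matching k8s-app=kubernetes-dashboard, logs each attempt while the apiserver refuses connections, and gives up once the 9m0s deadline expires. A minimal client-go sketch of such a loop follows; the function name, 5s retry interval, and readiness check are illustrative assumptions, not minikube's actual test helper.

// Sketch of a label-selector poll loop like the one behind the warnings
// above. Assumes a reachable kubeconfig at the default location; the
// helper name and interval are illustrative, not minikube's code.
package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitForPods(ctx context.Context, cs *kubernetes.Clientset, ns, selector string) error {
	for {
		pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			// The report's WARNING lines correspond to this branch: the
			// apiserver refused the connection, so log and retry.
			fmt.Printf("WARNING: pod list for %q %q returned: %v\n", ns, selector, err)
		} else {
			for _, p := range pods.Items {
				if p.Status.Phase == "Running" {
					return nil
				}
			}
		}
		select {
		case <-ctx.Done():
			return ctx.Err() // surfaces as "context deadline exceeded"
		case <-time.After(5 * time.Second):
		}
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx, cancel := context.WithTimeout(context.Background(), 9*time.Minute)
	defer cancel()
	if err := waitForPods(ctx, cs, "kubernetes-dashboard", "k8s-app=kubernetes-dashboard"); err != nil {
		fmt.Println("failed waiting for pods:", err)
	}
}

The one differently worded warning just before the failure ("client rate limiter Wait returned an error: context deadline exceeded") is consistent with the deadline expiring inside client-go's request rate limiter rather than during the TCP dial itself.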
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-254968 -n old-k8s-version-254968
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-254968 -n old-k8s-version-254968: exit status 2 (243.650565ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
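Both status probes in this post-mortem use minikube's --format flag, which evaluates a Go text/template against the profile's component status; that is how {{.Host}} can render Running while {{.APIServer}} renders Stopped for the same VM. A self-contained sketch of that rendering, with a stand-in struct and field values mirroring the report's output rather than minikube's own exported type:

// Renders a --format style Go template against a status struct.
// The struct and its values are stand-ins, not minikube's type.
package main

import (
	"os"
	"text/template"
)

type status struct {
	Host      string
	Kubelet   string
	APIServer string
}

func main() {
	st := status{Host: "Running", Kubelet: "Stopped", APIServer: "Stopped"}
	tmpl := template.Must(template.New("status").Parse("{{.APIServer}}\n"))
	if err := tmpl.Execute(os.Stdout, st); err != nil { // prints "Stopped"
		panic(err)
	}
}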
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-254968 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-254968 logs -n 25: (1.13601234s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p no-preload-907398                                   | no-preload-907398            | jenkins | v1.32.0 | 29 Feb 24 02:12 UTC | 29 Feb 24 02:18 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-254367       | default-k8s-diff-port-254367 | jenkins | v1.32.0 | 29 Feb 24 02:12 UTC | 29 Feb 24 02:12 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-254367 | jenkins | v1.32.0 | 29 Feb 24 02:12 UTC | 29 Feb 24 02:18 UTC |
	|         | default-k8s-diff-port-254367                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-665766            | embed-certs-665766           | jenkins | v1.32.0 | 29 Feb 24 02:13 UTC | 29 Feb 24 02:13 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-665766                                  | embed-certs-665766           | jenkins | v1.32.0 | 29 Feb 24 02:13 UTC | 29 Feb 24 02:14 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-254968                              | old-k8s-version-254968       | jenkins | v1.32.0 | 29 Feb 24 02:14 UTC | 29 Feb 24 02:14 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-254968             | old-k8s-version-254968       | jenkins | v1.32.0 | 29 Feb 24 02:14 UTC | 29 Feb 24 02:14 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-254968                              | old-k8s-version-254968       | jenkins | v1.32.0 | 29 Feb 24 02:14 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-665766                 | embed-certs-665766           | jenkins | v1.32.0 | 29 Feb 24 02:15 UTC | 29 Feb 24 02:15 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-665766                                  | embed-certs-665766           | jenkins | v1.32.0 | 29 Feb 24 02:15 UTC | 29 Feb 24 02:25 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| image   | no-preload-907398 image list                           | no-preload-907398            | jenkins | v1.32.0 | 29 Feb 24 02:18 UTC | 29 Feb 24 02:18 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p no-preload-907398                                   | no-preload-907398            | jenkins | v1.32.0 | 29 Feb 24 02:18 UTC | 29 Feb 24 02:18 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p no-preload-907398                                   | no-preload-907398            | jenkins | v1.32.0 | 29 Feb 24 02:18 UTC | 29 Feb 24 02:18 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p no-preload-907398                                   | no-preload-907398            | jenkins | v1.32.0 | 29 Feb 24 02:18 UTC | 29 Feb 24 02:18 UTC |
	| delete  | -p no-preload-907398                                   | no-preload-907398            | jenkins | v1.32.0 | 29 Feb 24 02:18 UTC | 29 Feb 24 02:18 UTC |
	| image   | default-k8s-diff-port-254367                           | default-k8s-diff-port-254367 | jenkins | v1.32.0 | 29 Feb 24 02:18 UTC | 29 Feb 24 02:18 UTC |
	|         | image list --format=json                               |                              |         |         |                     |                     |
	| pause   | -p                                                     | default-k8s-diff-port-254367 | jenkins | v1.32.0 | 29 Feb 24 02:18 UTC | 29 Feb 24 02:18 UTC |
	|         | default-k8s-diff-port-254367                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p                                                     | default-k8s-diff-port-254367 | jenkins | v1.32.0 | 29 Feb 24 02:18 UTC | 29 Feb 24 02:18 UTC |
	|         | default-k8s-diff-port-254367                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-254367 | jenkins | v1.32.0 | 29 Feb 24 02:18 UTC | 29 Feb 24 02:18 UTC |
	|         | default-k8s-diff-port-254367                           |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-254367 | jenkins | v1.32.0 | 29 Feb 24 02:18 UTC | 29 Feb 24 02:18 UTC |
	|         | default-k8s-diff-port-254367                           |                              |         |         |                     |                     |
	| image   | embed-certs-665766 image list                          | embed-certs-665766           | jenkins | v1.32.0 | 29 Feb 24 02:25 UTC | 29 Feb 24 02:25 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p embed-certs-665766                                  | embed-certs-665766           | jenkins | v1.32.0 | 29 Feb 24 02:25 UTC | 29 Feb 24 02:25 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p embed-certs-665766                                  | embed-certs-665766           | jenkins | v1.32.0 | 29 Feb 24 02:25 UTC | 29 Feb 24 02:25 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p embed-certs-665766                                  | embed-certs-665766           | jenkins | v1.32.0 | 29 Feb 24 02:25 UTC | 29 Feb 24 02:25 UTC |
	| delete  | -p embed-certs-665766                                  | embed-certs-665766           | jenkins | v1.32.0 | 29 Feb 24 02:25 UTC | 29 Feb 24 02:25 UTC |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/29 02:15:00
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.22.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
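	The [IWEF] format above is standard glog framing: severity letter, month/day, timestamp, thread id, source location, then the message. As a minimal shell sketch (assuming GNU awk; the field labels and the file name last_start.log are illustrative, not minikube's), each entry can be split like this:
	
		# $1 = level+mmdd, $2 = hh:mm:ss.uuuuuu, $3 = threadid, $4 = file:line], rest = msg
		awk '{ lvl=substr($1,1,1); day=substr($1,2); ts=$2; tid=$3; src=$4;
		       $1=$2=$3=$4=""; print lvl, day, ts, tid, src, $0 }' last_start.log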
	I0229 02:15:00.195513  361093 out.go:291] Setting OutFile to fd 1 ...
	I0229 02:15:00.195780  361093 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 02:15:00.195791  361093 out.go:304] Setting ErrFile to fd 2...
	I0229 02:15:00.195798  361093 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 02:15:00.196014  361093 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18063-309085/.minikube/bin
	I0229 02:15:00.196538  361093 out.go:298] Setting JSON to false
	I0229 02:15:00.197510  361093 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":7044,"bootTime":1709165856,"procs":215,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1052-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0229 02:15:00.197578  361093 start.go:139] virtualization: kvm guest
	I0229 02:15:00.199670  361093 out.go:177] * [embed-certs-665766] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0229 02:15:00.201014  361093 out.go:177]   - MINIKUBE_LOCATION=18063
	I0229 02:15:00.202314  361093 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0229 02:15:00.201016  361093 notify.go:220] Checking for updates...
	I0229 02:15:00.204683  361093 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18063-309085/kubeconfig
	I0229 02:15:00.205981  361093 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18063-309085/.minikube
	I0229 02:15:00.207104  361093 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0229 02:15:00.208151  361093 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0229 02:15:00.209800  361093 config.go:182] Loaded profile config "embed-certs-665766": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0229 02:15:00.210427  361093 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0229 02:15:00.210478  361093 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:15:00.226129  361093 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35133
	I0229 02:15:00.226543  361093 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:15:00.227211  361093 main.go:141] libmachine: Using API Version  1
	I0229 02:15:00.227260  361093 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:15:00.227606  361093 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:15:00.227858  361093 main.go:141] libmachine: (embed-certs-665766) Calling .DriverName
	I0229 02:15:00.228153  361093 driver.go:392] Setting default libvirt URI to qemu:///system
	I0229 02:15:00.228600  361093 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0229 02:15:00.228648  361093 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:15:00.244111  361093 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46163
	I0229 02:15:00.244523  361093 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:15:00.244927  361093 main.go:141] libmachine: Using API Version  1
	I0229 02:15:00.244955  361093 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:15:00.245291  361093 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:15:00.245488  361093 main.go:141] libmachine: (embed-certs-665766) Calling .DriverName
	I0229 02:15:00.279319  361093 out.go:177] * Using the kvm2 driver based on existing profile
	I0229 02:15:00.280565  361093 start.go:299] selected driver: kvm2
	I0229 02:15:00.280576  361093 start.go:903] validating driver "kvm2" against &{Name:embed-certs-665766 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-665766 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.252 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 02:15:00.280689  361093 start.go:914] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0229 02:15:00.281579  361093 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 02:15:00.281718  361093 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18063-309085/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0229 02:15:00.296404  361093 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0229 02:15:00.296764  361093 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0229 02:15:00.296834  361093 cni.go:84] Creating CNI manager for ""
	I0229 02:15:00.296847  361093 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0229 02:15:00.296856  361093 start_flags.go:323] config:
	{Name:embed-certs-665766 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-665766 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.252 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 02:15:00.296993  361093 iso.go:125] acquiring lock: {Name:mk1a6013324bd96d611c2de882ca0af6f4df38f8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 02:15:00.298652  361093 out.go:177] * Starting control plane node embed-certs-665766 in cluster embed-certs-665766
	I0229 02:15:00.299785  361093 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I0229 02:15:00.299837  361093 preload.go:148] Found local preload: /home/jenkins/minikube-integration/18063-309085/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-amd64.tar.lz4
	I0229 02:15:00.299848  361093 cache.go:56] Caching tarball of preloaded images
	I0229 02:15:00.299924  361093 preload.go:174] Found /home/jenkins/minikube-integration/18063-309085/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0229 02:15:00.299936  361093 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on containerd
	I0229 02:15:00.300040  361093 profile.go:148] Saving config to /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/embed-certs-665766/config.json ...
	I0229 02:15:00.300211  361093 start.go:365] acquiring machines lock for embed-certs-665766: {Name:mk8de78527e9cb979575b614e5d893b33768243a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0229 02:15:00.300253  361093 start.go:369] acquired machines lock for "embed-certs-665766" in 22.524µs
	I0229 02:15:00.300268  361093 start.go:96] Skipping create...Using existing machine configuration
	I0229 02:15:00.300281  361093 fix.go:54] fixHost starting: 
	I0229 02:15:00.300618  361093 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0229 02:15:00.300658  361093 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:15:00.315579  361093 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37361
	I0229 02:15:00.315993  361093 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:15:00.316460  361093 main.go:141] libmachine: Using API Version  1
	I0229 02:15:00.316481  361093 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:15:00.316776  361093 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:15:00.317012  361093 main.go:141] libmachine: (embed-certs-665766) Calling .DriverName
	I0229 02:15:00.317164  361093 main.go:141] libmachine: (embed-certs-665766) Calling .GetState
	I0229 02:15:00.318770  361093 fix.go:102] recreateIfNeeded on embed-certs-665766: state=Stopped err=<nil>
	I0229 02:15:00.318802  361093 main.go:141] libmachine: (embed-certs-665766) Calling .DriverName
	W0229 02:15:00.318984  361093 fix.go:128] unexpected machine state, will restart: <nil>
	I0229 02:15:00.320597  361093 out.go:177] * Restarting existing kvm2 VM for "embed-certs-665766" ...
	I0229 02:14:57.672798  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:14:58.172654  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:14:58.673282  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:14:59.173312  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:14:59.672878  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:00.172953  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:00.673170  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:01.173005  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:01.672595  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:02.172649  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
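	The repeated Run lines above show minikube polling for a running kube-apiserver process roughly every 500ms. A minimal standalone sketch of the same wait loop (the interval is inferred from the timestamps; the attempt cap is an assumption):
	
		# poll for the apiserver process, as ssh_runner does above
		for i in $(seq 1 120); do
			sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null && break
			sleep 0.5   # ~500ms between attempts, matching the log timestamps
		done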
	I0229 02:14:58.736314  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:15:00.738234  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:15:02.738646  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:14:59.777395  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:15:01.781443  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
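	The pod_ready lines poll the Ready condition of the metrics-server pod until it flips to True. minikube performs this check through the API in Go; a hedged kubectl equivalent (pod name copied from the log, loop structure ours):
	
		until [ "$(kubectl -n kube-system get pod metrics-server-57f55c9bc5-hxzvc \
			-o jsonpath='{.status.conditions[?(@.type=="Ready")].status}')" = "True" ]; do
			sleep 2
		done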
	I0229 02:15:00.321860  361093 main.go:141] libmachine: (embed-certs-665766) Calling .Start
	I0229 02:15:00.322009  361093 main.go:141] libmachine: (embed-certs-665766) Ensuring networks are active...
	I0229 02:15:00.322780  361093 main.go:141] libmachine: (embed-certs-665766) Ensuring network default is active
	I0229 02:15:00.323102  361093 main.go:141] libmachine: (embed-certs-665766) Ensuring network mk-embed-certs-665766 is active
	I0229 02:15:00.323540  361093 main.go:141] libmachine: (embed-certs-665766) Getting domain xml...
	I0229 02:15:00.324206  361093 main.go:141] libmachine: (embed-certs-665766) Creating domain...
	I0229 02:15:01.564400  361093 main.go:141] libmachine: (embed-certs-665766) Waiting to get IP...
	I0229 02:15:01.565163  361093 main.go:141] libmachine: (embed-certs-665766) DBG | domain embed-certs-665766 has defined MAC address 52:54:00:0f:ed:e3 in network mk-embed-certs-665766
	I0229 02:15:01.565606  361093 main.go:141] libmachine: (embed-certs-665766) DBG | unable to find current IP address of domain embed-certs-665766 in network mk-embed-certs-665766
	I0229 02:15:01.565665  361093 main.go:141] libmachine: (embed-certs-665766) DBG | I0229 02:15:01.565569  361128 retry.go:31] will retry after 283.275743ms: waiting for machine to come up
	I0229 02:15:01.850148  361093 main.go:141] libmachine: (embed-certs-665766) DBG | domain embed-certs-665766 has defined MAC address 52:54:00:0f:ed:e3 in network mk-embed-certs-665766
	I0229 02:15:01.850742  361093 main.go:141] libmachine: (embed-certs-665766) DBG | unable to find current IP address of domain embed-certs-665766 in network mk-embed-certs-665766
	I0229 02:15:01.850796  361093 main.go:141] libmachine: (embed-certs-665766) DBG | I0229 02:15:01.850687  361128 retry.go:31] will retry after 252.966549ms: waiting for machine to come up
	I0229 02:15:02.105129  361093 main.go:141] libmachine: (embed-certs-665766) DBG | domain embed-certs-665766 has defined MAC address 52:54:00:0f:ed:e3 in network mk-embed-certs-665766
	I0229 02:15:02.105699  361093 main.go:141] libmachine: (embed-certs-665766) DBG | unable to find current IP address of domain embed-certs-665766 in network mk-embed-certs-665766
	I0229 02:15:02.105732  361093 main.go:141] libmachine: (embed-certs-665766) DBG | I0229 02:15:02.105660  361128 retry.go:31] will retry after 470.28664ms: waiting for machine to come up
	I0229 02:15:02.577216  361093 main.go:141] libmachine: (embed-certs-665766) DBG | domain embed-certs-665766 has defined MAC address 52:54:00:0f:ed:e3 in network mk-embed-certs-665766
	I0229 02:15:02.577778  361093 main.go:141] libmachine: (embed-certs-665766) DBG | unable to find current IP address of domain embed-certs-665766 in network mk-embed-certs-665766
	I0229 02:15:02.577807  361093 main.go:141] libmachine: (embed-certs-665766) DBG | I0229 02:15:02.577721  361128 retry.go:31] will retry after 527.191742ms: waiting for machine to come up
	I0229 02:15:03.106209  361093 main.go:141] libmachine: (embed-certs-665766) DBG | domain embed-certs-665766 has defined MAC address 52:54:00:0f:ed:e3 in network mk-embed-certs-665766
	I0229 02:15:03.106698  361093 main.go:141] libmachine: (embed-certs-665766) DBG | unable to find current IP address of domain embed-certs-665766 in network mk-embed-certs-665766
	I0229 02:15:03.106725  361093 main.go:141] libmachine: (embed-certs-665766) DBG | I0229 02:15:03.106650  361128 retry.go:31] will retry after 472.107889ms: waiting for machine to come up
	I0229 02:15:03.580375  361093 main.go:141] libmachine: (embed-certs-665766) DBG | domain embed-certs-665766 has defined MAC address 52:54:00:0f:ed:e3 in network mk-embed-certs-665766
	I0229 02:15:03.580945  361093 main.go:141] libmachine: (embed-certs-665766) DBG | unable to find current IP address of domain embed-certs-665766 in network mk-embed-certs-665766
	I0229 02:15:03.580972  361093 main.go:141] libmachine: (embed-certs-665766) DBG | I0229 02:15:03.580890  361128 retry.go:31] will retry after 683.066759ms: waiting for machine to come up
	I0229 02:15:04.265769  361093 main.go:141] libmachine: (embed-certs-665766) DBG | domain embed-certs-665766 has defined MAC address 52:54:00:0f:ed:e3 in network mk-embed-certs-665766
	I0229 02:15:04.266340  361093 main.go:141] libmachine: (embed-certs-665766) DBG | unable to find current IP address of domain embed-certs-665766 in network mk-embed-certs-665766
	I0229 02:15:04.266370  361093 main.go:141] libmachine: (embed-certs-665766) DBG | I0229 02:15:04.266282  361128 retry.go:31] will retry after 1.031418978s: waiting for machine to come up
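	The retry.go lines show the kvm2 driver waiting, with growing backoff, for the restarted VM to obtain a DHCP lease in its libvirt network. A hedged way to observe the same lease from the host (network name and MAC copied from the log):
	
		# list DHCP leases in the machine network and match the domain's MAC
		virsh --connect qemu:///system net-dhcp-leases mk-embed-certs-665766 \
			| grep -i '52:54:00:0f:ed:e3'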
	I0229 02:15:02.673169  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:03.173251  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:03.672864  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:04.173580  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:04.672736  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:05.173278  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:05.672747  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:06.173514  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:06.672853  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:07.173295  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:05.238704  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:15:07.736326  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:15:04.278766  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:15:06.779170  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:15:05.299213  361093 main.go:141] libmachine: (embed-certs-665766) DBG | domain embed-certs-665766 has defined MAC address 52:54:00:0f:ed:e3 in network mk-embed-certs-665766
	I0229 02:15:05.299740  361093 main.go:141] libmachine: (embed-certs-665766) DBG | unable to find current IP address of domain embed-certs-665766 in network mk-embed-certs-665766
	I0229 02:15:05.299773  361093 main.go:141] libmachine: (embed-certs-665766) DBG | I0229 02:15:05.299673  361128 retry.go:31] will retry after 1.037425014s: waiting for machine to come up
	I0229 02:15:06.339189  361093 main.go:141] libmachine: (embed-certs-665766) DBG | domain embed-certs-665766 has defined MAC address 52:54:00:0f:ed:e3 in network mk-embed-certs-665766
	I0229 02:15:06.339656  361093 main.go:141] libmachine: (embed-certs-665766) DBG | unable to find current IP address of domain embed-certs-665766 in network mk-embed-certs-665766
	I0229 02:15:06.339688  361093 main.go:141] libmachine: (embed-certs-665766) DBG | I0229 02:15:06.339607  361128 retry.go:31] will retry after 1.829261156s: waiting for machine to come up
	I0229 02:15:08.171250  361093 main.go:141] libmachine: (embed-certs-665766) DBG | domain embed-certs-665766 has defined MAC address 52:54:00:0f:ed:e3 in network mk-embed-certs-665766
	I0229 02:15:08.171913  361093 main.go:141] libmachine: (embed-certs-665766) DBG | unable to find current IP address of domain embed-certs-665766 in network mk-embed-certs-665766
	I0229 02:15:08.171940  361093 main.go:141] libmachine: (embed-certs-665766) DBG | I0229 02:15:08.171868  361128 retry.go:31] will retry after 1.840049442s: waiting for machine to come up
	I0229 02:15:10.015035  361093 main.go:141] libmachine: (embed-certs-665766) DBG | domain embed-certs-665766 has defined MAC address 52:54:00:0f:ed:e3 in network mk-embed-certs-665766
	I0229 02:15:10.015601  361093 main.go:141] libmachine: (embed-certs-665766) DBG | unable to find current IP address of domain embed-certs-665766 in network mk-embed-certs-665766
	I0229 02:15:10.015624  361093 main.go:141] libmachine: (embed-certs-665766) DBG | I0229 02:15:10.015545  361128 retry.go:31] will retry after 2.792261425s: waiting for machine to come up
	I0229 02:15:07.673496  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:08.173235  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:08.672970  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:09.173203  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:09.672669  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:10.172971  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:10.673523  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:11.172857  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:11.672596  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:12.173541  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:10.236392  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:15:12.241873  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:15:09.277845  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:15:11.280119  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:15:13.777454  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:15:12.811472  361093 main.go:141] libmachine: (embed-certs-665766) DBG | domain embed-certs-665766 has defined MAC address 52:54:00:0f:ed:e3 in network mk-embed-certs-665766
	I0229 02:15:12.812070  361093 main.go:141] libmachine: (embed-certs-665766) DBG | unable to find current IP address of domain embed-certs-665766 in network mk-embed-certs-665766
	I0229 02:15:12.812092  361093 main.go:141] libmachine: (embed-certs-665766) DBG | I0229 02:15:12.812028  361128 retry.go:31] will retry after 3.422816729s: waiting for machine to come up
	I0229 02:15:12.673205  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:13.173523  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:13.672774  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:14.173115  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:14.673616  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:15.172831  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:15.673160  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:16.172966  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:16.673287  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:17.172640  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:14.243740  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:15:16.736133  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:15:15.778484  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:15:17.778658  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:15:16.236374  361093 main.go:141] libmachine: (embed-certs-665766) DBG | domain embed-certs-665766 has defined MAC address 52:54:00:0f:ed:e3 in network mk-embed-certs-665766
	I0229 02:15:16.236943  361093 main.go:141] libmachine: (embed-certs-665766) DBG | unable to find current IP address of domain embed-certs-665766 in network mk-embed-certs-665766
	I0229 02:15:16.236973  361093 main.go:141] libmachine: (embed-certs-665766) DBG | I0229 02:15:16.236905  361128 retry.go:31] will retry after 3.865566322s: waiting for machine to come up
	I0229 02:15:20.106964  361093 main.go:141] libmachine: (embed-certs-665766) DBG | domain embed-certs-665766 has defined MAC address 52:54:00:0f:ed:e3 in network mk-embed-certs-665766
	I0229 02:15:20.107455  361093 main.go:141] libmachine: (embed-certs-665766) Found IP for machine: 192.168.39.252
	I0229 02:15:20.107480  361093 main.go:141] libmachine: (embed-certs-665766) Reserving static IP address...
	I0229 02:15:20.107494  361093 main.go:141] libmachine: (embed-certs-665766) DBG | domain embed-certs-665766 has current primary IP address 192.168.39.252 and MAC address 52:54:00:0f:ed:e3 in network mk-embed-certs-665766
	I0229 02:15:20.107964  361093 main.go:141] libmachine: (embed-certs-665766) Reserved static IP address: 192.168.39.252
	I0229 02:15:20.107994  361093 main.go:141] libmachine: (embed-certs-665766) Waiting for SSH to be available...
	I0229 02:15:20.108041  361093 main.go:141] libmachine: (embed-certs-665766) DBG | found host DHCP lease matching {name: "embed-certs-665766", mac: "52:54:00:0f:ed:e3", ip: "192.168.39.252"} in network mk-embed-certs-665766: {Iface:virbr4 ExpiryTime:2024-02-29 03:15:12 +0000 UTC Type:0 Mac:52:54:00:0f:ed:e3 Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:embed-certs-665766 Clientid:01:52:54:00:0f:ed:e3}
	I0229 02:15:20.108074  361093 main.go:141] libmachine: (embed-certs-665766) DBG | skip adding static IP to network mk-embed-certs-665766 - found existing host DHCP lease matching {name: "embed-certs-665766", mac: "52:54:00:0f:ed:e3", ip: "192.168.39.252"}
	I0229 02:15:20.108095  361093 main.go:141] libmachine: (embed-certs-665766) DBG | Getting to WaitForSSH function...
	I0229 02:15:20.110175  361093 main.go:141] libmachine: (embed-certs-665766) DBG | domain embed-certs-665766 has defined MAC address 52:54:00:0f:ed:e3 in network mk-embed-certs-665766
	I0229 02:15:20.110485  361093 main.go:141] libmachine: (embed-certs-665766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:ed:e3", ip: ""} in network mk-embed-certs-665766: {Iface:virbr4 ExpiryTime:2024-02-29 03:15:12 +0000 UTC Type:0 Mac:52:54:00:0f:ed:e3 Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:embed-certs-665766 Clientid:01:52:54:00:0f:ed:e3}
	I0229 02:15:20.110511  361093 main.go:141] libmachine: (embed-certs-665766) DBG | domain embed-certs-665766 has defined IP address 192.168.39.252 and MAC address 52:54:00:0f:ed:e3 in network mk-embed-certs-665766
	I0229 02:15:20.110667  361093 main.go:141] libmachine: (embed-certs-665766) DBG | Using SSH client type: external
	I0229 02:15:20.110696  361093 main.go:141] libmachine: (embed-certs-665766) DBG | Using SSH private key: /home/jenkins/minikube-integration/18063-309085/.minikube/machines/embed-certs-665766/id_rsa (-rw-------)
	I0229 02:15:20.110761  361093 main.go:141] libmachine: (embed-certs-665766) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.252 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18063-309085/.minikube/machines/embed-certs-665766/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0229 02:15:20.110788  361093 main.go:141] libmachine: (embed-certs-665766) DBG | About to run SSH command:
	I0229 02:15:20.110807  361093 main.go:141] libmachine: (embed-certs-665766) DBG | exit 0
	I0229 02:15:17.672587  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:18.173318  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:18.673512  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:19.172966  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:19.673611  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:20.172605  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:20.672736  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:21.173587  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:21.673298  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:22.172625  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:19.238381  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:15:21.736665  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:15:20.246600  361093 main.go:141] libmachine: (embed-certs-665766) DBG | SSH cmd err, output: <nil>: 
	I0229 02:15:20.247008  361093 main.go:141] libmachine: (embed-certs-665766) Calling .GetConfigRaw
	I0229 02:15:20.247628  361093 main.go:141] libmachine: (embed-certs-665766) Calling .GetIP
	I0229 02:15:20.250151  361093 main.go:141] libmachine: (embed-certs-665766) DBG | domain embed-certs-665766 has defined MAC address 52:54:00:0f:ed:e3 in network mk-embed-certs-665766
	I0229 02:15:20.250492  361093 main.go:141] libmachine: (embed-certs-665766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:ed:e3", ip: ""} in network mk-embed-certs-665766: {Iface:virbr4 ExpiryTime:2024-02-29 03:15:12 +0000 UTC Type:0 Mac:52:54:00:0f:ed:e3 Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:embed-certs-665766 Clientid:01:52:54:00:0f:ed:e3}
	I0229 02:15:20.250524  361093 main.go:141] libmachine: (embed-certs-665766) DBG | domain embed-certs-665766 has defined IP address 192.168.39.252 and MAC address 52:54:00:0f:ed:e3 in network mk-embed-certs-665766
	I0229 02:15:20.250769  361093 profile.go:148] Saving config to /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/embed-certs-665766/config.json ...
	I0229 02:15:20.251020  361093 machine.go:88] provisioning docker machine ...
	I0229 02:15:20.251044  361093 main.go:141] libmachine: (embed-certs-665766) Calling .DriverName
	I0229 02:15:20.251255  361093 main.go:141] libmachine: (embed-certs-665766) Calling .GetMachineName
	I0229 02:15:20.251442  361093 buildroot.go:166] provisioning hostname "embed-certs-665766"
	I0229 02:15:20.251465  361093 main.go:141] libmachine: (embed-certs-665766) Calling .GetMachineName
	I0229 02:15:20.251607  361093 main.go:141] libmachine: (embed-certs-665766) Calling .GetSSHHostname
	I0229 02:15:20.253793  361093 main.go:141] libmachine: (embed-certs-665766) DBG | domain embed-certs-665766 has defined MAC address 52:54:00:0f:ed:e3 in network mk-embed-certs-665766
	I0229 02:15:20.254144  361093 main.go:141] libmachine: (embed-certs-665766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:ed:e3", ip: ""} in network mk-embed-certs-665766: {Iface:virbr4 ExpiryTime:2024-02-29 03:15:12 +0000 UTC Type:0 Mac:52:54:00:0f:ed:e3 Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:embed-certs-665766 Clientid:01:52:54:00:0f:ed:e3}
	I0229 02:15:20.254176  361093 main.go:141] libmachine: (embed-certs-665766) DBG | domain embed-certs-665766 has defined IP address 192.168.39.252 and MAC address 52:54:00:0f:ed:e3 in network mk-embed-certs-665766
	I0229 02:15:20.254345  361093 main.go:141] libmachine: (embed-certs-665766) Calling .GetSSHPort
	I0229 02:15:20.254528  361093 main.go:141] libmachine: (embed-certs-665766) Calling .GetSSHKeyPath
	I0229 02:15:20.254701  361093 main.go:141] libmachine: (embed-certs-665766) Calling .GetSSHKeyPath
	I0229 02:15:20.254886  361093 main.go:141] libmachine: (embed-certs-665766) Calling .GetSSHUsername
	I0229 02:15:20.255075  361093 main.go:141] libmachine: Using SSH client type: native
	I0229 02:15:20.255290  361093 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.252 22 <nil> <nil>}
	I0229 02:15:20.255302  361093 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-665766 && echo "embed-certs-665766" | sudo tee /etc/hostname
	I0229 02:15:20.387006  361093 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-665766
	
	I0229 02:15:20.387037  361093 main.go:141] libmachine: (embed-certs-665766) Calling .GetSSHHostname
	I0229 02:15:20.389660  361093 main.go:141] libmachine: (embed-certs-665766) DBG | domain embed-certs-665766 has defined MAC address 52:54:00:0f:ed:e3 in network mk-embed-certs-665766
	I0229 02:15:20.390034  361093 main.go:141] libmachine: (embed-certs-665766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:ed:e3", ip: ""} in network mk-embed-certs-665766: {Iface:virbr4 ExpiryTime:2024-02-29 03:15:12 +0000 UTC Type:0 Mac:52:54:00:0f:ed:e3 Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:embed-certs-665766 Clientid:01:52:54:00:0f:ed:e3}
	I0229 02:15:20.390075  361093 main.go:141] libmachine: (embed-certs-665766) DBG | domain embed-certs-665766 has defined IP address 192.168.39.252 and MAC address 52:54:00:0f:ed:e3 in network mk-embed-certs-665766
	I0229 02:15:20.390263  361093 main.go:141] libmachine: (embed-certs-665766) Calling .GetSSHPort
	I0229 02:15:20.390512  361093 main.go:141] libmachine: (embed-certs-665766) Calling .GetSSHKeyPath
	I0229 02:15:20.390720  361093 main.go:141] libmachine: (embed-certs-665766) Calling .GetSSHKeyPath
	I0229 02:15:20.390846  361093 main.go:141] libmachine: (embed-certs-665766) Calling .GetSSHUsername
	I0229 02:15:20.391013  361093 main.go:141] libmachine: Using SSH client type: native
	I0229 02:15:20.391195  361093 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.252 22 <nil> <nil>}
	I0229 02:15:20.391212  361093 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-665766' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-665766/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-665766' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0229 02:15:20.517065  361093 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0229 02:15:20.517117  361093 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18063-309085/.minikube CaCertPath:/home/jenkins/minikube-integration/18063-309085/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18063-309085/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18063-309085/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18063-309085/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18063-309085/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18063-309085/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18063-309085/.minikube}
	I0229 02:15:20.517171  361093 buildroot.go:174] setting up certificates
	I0229 02:15:20.517189  361093 provision.go:83] configureAuth start
	I0229 02:15:20.517207  361093 main.go:141] libmachine: (embed-certs-665766) Calling .GetMachineName
	I0229 02:15:20.517534  361093 main.go:141] libmachine: (embed-certs-665766) Calling .GetIP
	I0229 02:15:20.520639  361093 main.go:141] libmachine: (embed-certs-665766) DBG | domain embed-certs-665766 has defined MAC address 52:54:00:0f:ed:e3 in network mk-embed-certs-665766
	I0229 02:15:20.521028  361093 main.go:141] libmachine: (embed-certs-665766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:ed:e3", ip: ""} in network mk-embed-certs-665766: {Iface:virbr4 ExpiryTime:2024-02-29 03:15:12 +0000 UTC Type:0 Mac:52:54:00:0f:ed:e3 Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:embed-certs-665766 Clientid:01:52:54:00:0f:ed:e3}
	I0229 02:15:20.521062  361093 main.go:141] libmachine: (embed-certs-665766) DBG | domain embed-certs-665766 has defined IP address 192.168.39.252 and MAC address 52:54:00:0f:ed:e3 in network mk-embed-certs-665766
	I0229 02:15:20.521231  361093 main.go:141] libmachine: (embed-certs-665766) Calling .GetSSHHostname
	I0229 02:15:20.523702  361093 main.go:141] libmachine: (embed-certs-665766) DBG | domain embed-certs-665766 has defined MAC address 52:54:00:0f:ed:e3 in network mk-embed-certs-665766
	I0229 02:15:20.524078  361093 main.go:141] libmachine: (embed-certs-665766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:ed:e3", ip: ""} in network mk-embed-certs-665766: {Iface:virbr4 ExpiryTime:2024-02-29 03:15:12 +0000 UTC Type:0 Mac:52:54:00:0f:ed:e3 Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:embed-certs-665766 Clientid:01:52:54:00:0f:ed:e3}
	I0229 02:15:20.524128  361093 main.go:141] libmachine: (embed-certs-665766) DBG | domain embed-certs-665766 has defined IP address 192.168.39.252 and MAC address 52:54:00:0f:ed:e3 in network mk-embed-certs-665766
	I0229 02:15:20.524228  361093 provision.go:138] copyHostCerts
	I0229 02:15:20.524293  361093 exec_runner.go:144] found /home/jenkins/minikube-integration/18063-309085/.minikube/ca.pem, removing ...
	I0229 02:15:20.524319  361093 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18063-309085/.minikube/ca.pem
	I0229 02:15:20.524405  361093 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18063-309085/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18063-309085/.minikube/ca.pem (1082 bytes)
	I0229 02:15:20.524527  361093 exec_runner.go:144] found /home/jenkins/minikube-integration/18063-309085/.minikube/cert.pem, removing ...
	I0229 02:15:20.524537  361093 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18063-309085/.minikube/cert.pem
	I0229 02:15:20.524583  361093 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18063-309085/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18063-309085/.minikube/cert.pem (1123 bytes)
	I0229 02:15:20.524674  361093 exec_runner.go:144] found /home/jenkins/minikube-integration/18063-309085/.minikube/key.pem, removing ...
	I0229 02:15:20.524684  361093 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18063-309085/.minikube/key.pem
	I0229 02:15:20.524718  361093 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18063-309085/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18063-309085/.minikube/key.pem (1675 bytes)
	I0229 02:15:20.524803  361093 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18063-309085/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18063-309085/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18063-309085/.minikube/certs/ca-key.pem org=jenkins.embed-certs-665766 san=[192.168.39.252 192.168.39.252 localhost 127.0.0.1 minikube embed-certs-665766]
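	provision.go generates a server certificate signed by the minikube CA, carrying the SANs listed above. minikube does this in Go, not by shelling out; purely as a hedged openssl sketch (file names and -days are illustrative, the SAN list is copied from the log):
	
		openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem \
			-out server.csr -subj "/O=jenkins.embed-certs-665766"
		openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
			-out server.pem -days 365 \
			-extfile <(printf 'subjectAltName=IP:192.168.39.252,DNS:localhost,IP:127.0.0.1,DNS:minikube,DNS:embed-certs-665766')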
	I0229 02:15:20.822225  361093 provision.go:172] copyRemoteCerts
	I0229 02:15:20.822298  361093 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0229 02:15:20.822346  361093 main.go:141] libmachine: (embed-certs-665766) Calling .GetSSHHostname
	I0229 02:15:20.825396  361093 main.go:141] libmachine: (embed-certs-665766) DBG | domain embed-certs-665766 has defined MAC address 52:54:00:0f:ed:e3 in network mk-embed-certs-665766
	I0229 02:15:20.825833  361093 main.go:141] libmachine: (embed-certs-665766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:ed:e3", ip: ""} in network mk-embed-certs-665766: {Iface:virbr4 ExpiryTime:2024-02-29 03:15:12 +0000 UTC Type:0 Mac:52:54:00:0f:ed:e3 Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:embed-certs-665766 Clientid:01:52:54:00:0f:ed:e3}
	I0229 02:15:20.825863  361093 main.go:141] libmachine: (embed-certs-665766) DBG | domain embed-certs-665766 has defined IP address 192.168.39.252 and MAC address 52:54:00:0f:ed:e3 in network mk-embed-certs-665766
	I0229 02:15:20.826114  361093 main.go:141] libmachine: (embed-certs-665766) Calling .GetSSHPort
	I0229 02:15:20.826349  361093 main.go:141] libmachine: (embed-certs-665766) Calling .GetSSHKeyPath
	I0229 02:15:20.826496  361093 main.go:141] libmachine: (embed-certs-665766) Calling .GetSSHUsername
	I0229 02:15:20.826626  361093 sshutil.go:53] new ssh client: &{IP:192.168.39.252 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-309085/.minikube/machines/embed-certs-665766/id_rsa Username:docker}
	I0229 02:15:20.915099  361093 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-309085/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0229 02:15:20.942985  361093 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-309085/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0229 02:15:20.974642  361093 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-309085/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0229 02:15:21.002039  361093 provision.go:86] duration metric: configureAuth took 484.832048ms
	I0229 02:15:21.002101  361093 buildroot.go:189] setting minikube options for container-runtime
	I0229 02:15:21.002327  361093 config.go:182] Loaded profile config "embed-certs-665766": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0229 02:15:21.002341  361093 machine.go:91] provisioned docker machine in 751.30636ms
	I0229 02:15:21.002350  361093 start.go:300] post-start starting for "embed-certs-665766" (driver="kvm2")
	I0229 02:15:21.002361  361093 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0229 02:15:21.002433  361093 main.go:141] libmachine: (embed-certs-665766) Calling .DriverName
	I0229 02:15:21.002803  361093 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0229 02:15:21.002843  361093 main.go:141] libmachine: (embed-certs-665766) Calling .GetSSHHostname
	I0229 02:15:21.005633  361093 main.go:141] libmachine: (embed-certs-665766) DBG | domain embed-certs-665766 has defined MAC address 52:54:00:0f:ed:e3 in network mk-embed-certs-665766
	I0229 02:15:21.006105  361093 main.go:141] libmachine: (embed-certs-665766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:ed:e3", ip: ""} in network mk-embed-certs-665766: {Iface:virbr4 ExpiryTime:2024-02-29 03:15:12 +0000 UTC Type:0 Mac:52:54:00:0f:ed:e3 Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:embed-certs-665766 Clientid:01:52:54:00:0f:ed:e3}
	I0229 02:15:21.006141  361093 main.go:141] libmachine: (embed-certs-665766) DBG | domain embed-certs-665766 has defined IP address 192.168.39.252 and MAC address 52:54:00:0f:ed:e3 in network mk-embed-certs-665766
	I0229 02:15:21.006336  361093 main.go:141] libmachine: (embed-certs-665766) Calling .GetSSHPort
	I0229 02:15:21.006562  361093 main.go:141] libmachine: (embed-certs-665766) Calling .GetSSHKeyPath
	I0229 02:15:21.006784  361093 main.go:141] libmachine: (embed-certs-665766) Calling .GetSSHUsername
	I0229 02:15:21.006972  361093 sshutil.go:53] new ssh client: &{IP:192.168.39.252 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-309085/.minikube/machines/embed-certs-665766/id_rsa Username:docker}
	I0229 02:15:21.094951  361093 ssh_runner.go:195] Run: cat /etc/os-release
	I0229 02:15:21.100607  361093 info.go:137] Remote host: Buildroot 2023.02.9
	I0229 02:15:21.100637  361093 filesync.go:126] Scanning /home/jenkins/minikube-integration/18063-309085/.minikube/addons for local assets ...
	I0229 02:15:21.100736  361093 filesync.go:126] Scanning /home/jenkins/minikube-integration/18063-309085/.minikube/files for local assets ...
	I0229 02:15:21.100864  361093 filesync.go:149] local asset: /home/jenkins/minikube-integration/18063-309085/.minikube/files/etc/ssl/certs/3163362.pem -> 3163362.pem in /etc/ssl/certs
	I0229 02:15:21.101000  361093 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0229 02:15:21.113280  361093 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-309085/.minikube/files/etc/ssl/certs/3163362.pem --> /etc/ssl/certs/3163362.pem (1708 bytes)
	I0229 02:15:21.142831  361093 start.go:303] post-start completed in 140.464811ms
	I0229 02:15:21.142864  361093 fix.go:56] fixHost completed within 20.842581853s
	I0229 02:15:21.142977  361093 main.go:141] libmachine: (embed-certs-665766) Calling .GetSSHHostname
	I0229 02:15:21.145855  361093 main.go:141] libmachine: (embed-certs-665766) DBG | domain embed-certs-665766 has defined MAC address 52:54:00:0f:ed:e3 in network mk-embed-certs-665766
	I0229 02:15:21.146221  361093 main.go:141] libmachine: (embed-certs-665766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:ed:e3", ip: ""} in network mk-embed-certs-665766: {Iface:virbr4 ExpiryTime:2024-02-29 03:15:12 +0000 UTC Type:0 Mac:52:54:00:0f:ed:e3 Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:embed-certs-665766 Clientid:01:52:54:00:0f:ed:e3}
	I0229 02:15:21.146273  361093 main.go:141] libmachine: (embed-certs-665766) DBG | domain embed-certs-665766 has defined IP address 192.168.39.252 and MAC address 52:54:00:0f:ed:e3 in network mk-embed-certs-665766
	I0229 02:15:21.146427  361093 main.go:141] libmachine: (embed-certs-665766) Calling .GetSSHPort
	I0229 02:15:21.146675  361093 main.go:141] libmachine: (embed-certs-665766) Calling .GetSSHKeyPath
	I0229 02:15:21.146826  361093 main.go:141] libmachine: (embed-certs-665766) Calling .GetSSHKeyPath
	I0229 02:15:21.146946  361093 main.go:141] libmachine: (embed-certs-665766) Calling .GetSSHUsername
	I0229 02:15:21.147137  361093 main.go:141] libmachine: Using SSH client type: native
	I0229 02:15:21.147306  361093 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.252 22 <nil> <nil>}
	I0229 02:15:21.147316  361093 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0229 02:15:21.267552  361093 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709172921.247201349
	
	I0229 02:15:21.267579  361093 fix.go:206] guest clock: 1709172921.247201349
	I0229 02:15:21.267590  361093 fix.go:219] Guest: 2024-02-29 02:15:21.247201349 +0000 UTC Remote: 2024-02-29 02:15:21.142869918 +0000 UTC m=+21.001592109 (delta=104.331431ms)
	I0229 02:15:21.267644  361093 fix.go:190] guest clock delta is within tolerance: 104.331431ms
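	The guest-clock check above runs date +%s.%N over SSH and compares the result with the host clock, accepting the ~104ms delta as within tolerance. A hedged sketch of the same comparison (variable names ours, assuming the same SSH access the log uses):
	
		guest=$(ssh docker@192.168.39.252 'date +%s.%N')
		host=$(date +%s.%N)
		# positive means the guest lags the host; the log above measured ~104.3ms
		awk -v g="$guest" -v h="$host" 'BEGIN { printf "%.3f ms\n", (h - g) * 1000 }'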
	I0229 02:15:21.267653  361093 start.go:83] releasing machines lock for "embed-certs-665766", held for 20.967392077s
	I0229 02:15:21.267681  361093 main.go:141] libmachine: (embed-certs-665766) Calling .DriverName
	I0229 02:15:21.267949  361093 main.go:141] libmachine: (embed-certs-665766) Calling .GetIP
	I0229 02:15:21.270730  361093 main.go:141] libmachine: (embed-certs-665766) DBG | domain embed-certs-665766 has defined MAC address 52:54:00:0f:ed:e3 in network mk-embed-certs-665766
	I0229 02:15:21.271194  361093 main.go:141] libmachine: (embed-certs-665766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:ed:e3", ip: ""} in network mk-embed-certs-665766: {Iface:virbr4 ExpiryTime:2024-02-29 03:15:12 +0000 UTC Type:0 Mac:52:54:00:0f:ed:e3 Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:embed-certs-665766 Clientid:01:52:54:00:0f:ed:e3}
	I0229 02:15:21.271223  361093 main.go:141] libmachine: (embed-certs-665766) DBG | domain embed-certs-665766 has defined IP address 192.168.39.252 and MAC address 52:54:00:0f:ed:e3 in network mk-embed-certs-665766
	I0229 02:15:21.271559  361093 main.go:141] libmachine: (embed-certs-665766) Calling .DriverName
	I0229 02:15:21.272366  361093 main.go:141] libmachine: (embed-certs-665766) Calling .DriverName
	I0229 02:15:21.272582  361093 main.go:141] libmachine: (embed-certs-665766) Calling .DriverName
	I0229 02:15:21.272673  361093 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0229 02:15:21.272718  361093 main.go:141] libmachine: (embed-certs-665766) Calling .GetSSHHostname
	I0229 02:15:21.272844  361093 ssh_runner.go:195] Run: cat /version.json
	I0229 02:15:21.272867  361093 main.go:141] libmachine: (embed-certs-665766) Calling .GetSSHHostname
	I0229 02:15:21.276061  361093 main.go:141] libmachine: (embed-certs-665766) DBG | domain embed-certs-665766 has defined MAC address 52:54:00:0f:ed:e3 in network mk-embed-certs-665766
	I0229 02:15:21.276385  361093 main.go:141] libmachine: (embed-certs-665766) DBG | domain embed-certs-665766 has defined MAC address 52:54:00:0f:ed:e3 in network mk-embed-certs-665766
	I0229 02:15:21.276515  361093 main.go:141] libmachine: (embed-certs-665766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:ed:e3", ip: ""} in network mk-embed-certs-665766: {Iface:virbr4 ExpiryTime:2024-02-29 03:15:12 +0000 UTC Type:0 Mac:52:54:00:0f:ed:e3 Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:embed-certs-665766 Clientid:01:52:54:00:0f:ed:e3}
	I0229 02:15:21.276563  361093 main.go:141] libmachine: (embed-certs-665766) DBG | domain embed-certs-665766 has defined IP address 192.168.39.252 and MAC address 52:54:00:0f:ed:e3 in network mk-embed-certs-665766
	I0229 02:15:21.276647  361093 main.go:141] libmachine: (embed-certs-665766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:ed:e3", ip: ""} in network mk-embed-certs-665766: {Iface:virbr4 ExpiryTime:2024-02-29 03:15:12 +0000 UTC Type:0 Mac:52:54:00:0f:ed:e3 Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:embed-certs-665766 Clientid:01:52:54:00:0f:ed:e3}
	I0229 02:15:21.276673  361093 main.go:141] libmachine: (embed-certs-665766) DBG | domain embed-certs-665766 has defined IP address 192.168.39.252 and MAC address 52:54:00:0f:ed:e3 in network mk-embed-certs-665766
	I0229 02:15:21.276693  361093 main.go:141] libmachine: (embed-certs-665766) Calling .GetSSHPort
	I0229 02:15:21.276843  361093 main.go:141] libmachine: (embed-certs-665766) Calling .GetSSHPort
	I0229 02:15:21.276926  361093 main.go:141] libmachine: (embed-certs-665766) Calling .GetSSHKeyPath
	I0229 02:15:21.277031  361093 main.go:141] libmachine: (embed-certs-665766) Calling .GetSSHKeyPath
	I0229 02:15:21.277103  361093 main.go:141] libmachine: (embed-certs-665766) Calling .GetSSHUsername
	I0229 02:15:21.277160  361093 main.go:141] libmachine: (embed-certs-665766) Calling .GetSSHUsername
	I0229 02:15:21.277254  361093 sshutil.go:53] new ssh client: &{IP:192.168.39.252 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-309085/.minikube/machines/embed-certs-665766/id_rsa Username:docker}
	I0229 02:15:21.277316  361093 sshutil.go:53] new ssh client: &{IP:192.168.39.252 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-309085/.minikube/machines/embed-certs-665766/id_rsa Username:docker}
	I0229 02:15:21.380428  361093 ssh_runner.go:195] Run: systemctl --version
	I0229 02:15:21.387150  361093 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0229 02:15:21.393537  361093 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0229 02:15:21.393595  361093 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0229 02:15:21.411579  361093 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
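
The find/mv pipeline above sidelines any pre-existing bridge or podman CNI configs by renaming them to *.mk_disabled, so they cannot conflict with the CNI that minikube configures next. A minimal Go sketch of the same idea follows; the helper name and error handling are illustrative, not minikube's actual implementation:

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
    )

    // disableConflictingCNI renames bridge/podman CNI config files to
    // *.mk_disabled so the runtime ignores them, mirroring the find/mv
    // pipeline in the log. Illustrative sketch only.
    func disableConflictingCNI(dir string) ([]string, error) {
        entries, err := os.ReadDir(dir)
        if err != nil {
            return nil, err
        }
        var disabled []string
        for _, e := range entries {
            name := e.Name()
            if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
                continue // already disabled, or not a file
            }
            if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
                src := filepath.Join(dir, name)
                if err := os.Rename(src, src+".mk_disabled"); err != nil {
                    return disabled, err
                }
                disabled = append(disabled, src)
            }
        }
        return disabled, nil
    }

    func main() {
        disabled, err := disableConflictingCNI("/etc/cni/net.d")
        if err != nil {
            panic(err)
        }
        fmt.Println("disabled:", disabled)
    }
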
	I0229 02:15:21.411609  361093 start.go:475] detecting cgroup driver to use...
	I0229 02:15:21.411682  361093 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0229 02:15:21.442122  361093 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0229 02:15:21.457974  361093 docker.go:217] disabling cri-docker service (if available) ...
	I0229 02:15:21.458041  361093 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0229 02:15:21.474421  361093 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0229 02:15:21.490462  361093 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0229 02:15:21.618342  361093 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0229 02:15:21.802579  361093 docker.go:233] disabling docker service ...
	I0229 02:15:21.802649  361093 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0229 02:15:21.818349  361093 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0229 02:15:21.832338  361093 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0229 02:15:21.975684  361093 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0229 02:15:22.118703  361093 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0229 02:15:22.134525  361093 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0229 02:15:22.155421  361093 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0229 02:15:22.166809  361093 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0229 02:15:22.180082  361093 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0229 02:15:22.180163  361093 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0229 02:15:22.195414  361093 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0229 02:15:22.206812  361093 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0229 02:15:22.217930  361093 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0229 02:15:22.229893  361093 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0229 02:15:22.244345  361093 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
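
The run of sed edits above rewrites /etc/containerd/config.toml in place: SystemdCgroup = false selects the cgroupfs driver, sandbox_image is pinned to registry.k8s.io/pause:3.9, the legacy io.containerd.runtime.v1.linux and runc.v1 shims are mapped to io.containerd.runc.v2, and conf_dir is pointed at /etc/cni/net.d. A hedged Go sketch of such an in-place rewrite, covering three of the rules (the helper is assumed, not minikube's code, which shells out to sed as logged):

    package main

    import (
        "os"
        "regexp"
    )

    // rewriteConfig applies regex rewrites to a config file in place,
    // the same effect as the sed -i calls in the log.
    func rewriteConfig(path string, rules map[*regexp.Regexp]string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        for re, repl := range rules {
            data = re.ReplaceAll(data, []byte(repl))
        }
        return os.WriteFile(path, data, 0644)
    }

    func main() {
        rules := map[*regexp.Regexp]string{
            regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`): "${1}SystemdCgroup = false",
            regexp.MustCompile(`(?m)^(\s*)sandbox_image = .*$`): `${1}sandbox_image = "registry.k8s.io/pause:3.9"`,
            regexp.MustCompile(`(?m)^(\s*)conf_dir = .*$`):      `${1}conf_dir = "/etc/cni/net.d"`,
        }
        if err := rewriteConfig("/etc/containerd/config.toml", rules); err != nil {
            panic(err)
        }
    }
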
	I0229 02:15:22.255766  361093 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0229 02:15:22.265968  361093 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0229 02:15:22.266042  361093 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0229 02:15:22.280500  361093 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0229 02:15:22.290749  361093 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 02:15:22.447260  361093 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0229 02:15:22.489965  361093 start.go:522] Will wait 60s for socket path /run/containerd/containerd.sock
	I0229 02:15:22.490049  361093 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0229 02:15:22.495946  361093 retry.go:31] will retry after 681.640314ms: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
	I0229 02:15:23.178613  361093 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
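
The stat/retry pair above is a bounded wait: start.go allows 60s for /run/containerd/containerd.sock to appear after the containerd restart, and retry.go sleeps between attempts (681ms here). A minimal sketch of that wait, with a fixed 500ms poll standing in for minikube's jittered backoff:

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // waitForSocket polls until the unix socket exists or the
    // deadline passes.
    func waitForSocket(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for {
            if _, err := os.Stat(path); err == nil {
                return nil // socket present
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("timed out waiting for %s", path)
            }
            time.Sleep(500 * time.Millisecond)
        }
    }

    func main() {
        if err := waitForSocket("/run/containerd/containerd.sock", 60*time.Second); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }
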
	I0229 02:15:23.186465  361093 start.go:543] Will wait 60s for crictl version
	I0229 02:15:23.186531  361093 ssh_runner.go:195] Run: which crictl
	I0229 02:15:23.191421  361093 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0229 02:15:23.240728  361093 start.go:559] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.7.11
	RuntimeApiVersion:  v1
	I0229 02:15:23.240833  361093 ssh_runner.go:195] Run: containerd --version
	I0229 02:15:23.271700  361093 ssh_runner.go:195] Run: containerd --version
	I0229 02:15:23.311413  361093 out.go:177] * Preparing Kubernetes v1.28.4 on containerd 1.7.11 ...
	I0229 02:15:20.278855  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:15:22.776938  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:15:23.312543  361093 main.go:141] libmachine: (embed-certs-665766) Calling .GetIP
	I0229 02:15:23.315197  361093 main.go:141] libmachine: (embed-certs-665766) DBG | domain embed-certs-665766 has defined MAC address 52:54:00:0f:ed:e3 in network mk-embed-certs-665766
	I0229 02:15:23.315505  361093 main.go:141] libmachine: (embed-certs-665766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:ed:e3", ip: ""} in network mk-embed-certs-665766: {Iface:virbr4 ExpiryTime:2024-02-29 03:15:12 +0000 UTC Type:0 Mac:52:54:00:0f:ed:e3 Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:embed-certs-665766 Clientid:01:52:54:00:0f:ed:e3}
	I0229 02:15:23.315541  361093 main.go:141] libmachine: (embed-certs-665766) DBG | domain embed-certs-665766 has defined IP address 192.168.39.252 and MAC address 52:54:00:0f:ed:e3 in network mk-embed-certs-665766
	I0229 02:15:23.315774  361093 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0229 02:15:23.321091  361093 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0229 02:15:23.335366  361093 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I0229 02:15:23.335482  361093 ssh_runner.go:195] Run: sudo crictl images --output json
	I0229 02:15:23.380351  361093 containerd.go:612] all images are preloaded for containerd runtime.
	I0229 02:15:23.380391  361093 containerd.go:519] Images already preloaded, skipping extraction
	I0229 02:15:23.380462  361093 ssh_runner.go:195] Run: sudo crictl images --output json
	I0229 02:15:23.421267  361093 containerd.go:612] all images are preloaded for containerd runtime.
	I0229 02:15:23.421295  361093 cache_images.go:84] Images are preloaded, skipping loading
	I0229 02:15:23.421374  361093 ssh_runner.go:195] Run: sudo crictl info
	I0229 02:15:23.460765  361093 cni.go:84] Creating CNI manager for ""
	I0229 02:15:23.460802  361093 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0229 02:15:23.460841  361093 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0229 02:15:23.460868  361093 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.252 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-665766 NodeName:embed-certs-665766 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.252"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.252 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0229 02:15:23.461060  361093 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.252
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "embed-certs-665766"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.252
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.252"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
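
The generated kubeadm config above stitches four documents into one file: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration, all derived from the options struct logged at kubeadm.go:176. A minimal sketch of rendering such a config from a struct with text/template; the struct, field set, and template here are simplified stand-ins, not minikube's real template:

    package main

    import (
        "os"
        "text/template"
    )

    // kubeadmParams is a simplified stand-in for the options struct
    // logged above; only a few fields are shown.
    type kubeadmParams struct {
        AdvertiseAddress string
        BindPort         int
        NodeName         string
        PodSubnet        string
        K8sVersion       string
    }

    // Explicit "\n" joins keep the YAML indentation exact.
    const kubeadmTmpl = "apiVersion: kubeadm.k8s.io/v1beta3\n" +
        "kind: InitConfiguration\n" +
        "localAPIEndpoint:\n" +
        "  advertiseAddress: {{.AdvertiseAddress}}\n" +
        "  bindPort: {{.BindPort}}\n" +
        "nodeRegistration:\n" +
        "  name: \"{{.NodeName}}\"\n" +
        "---\n" +
        "apiVersion: kubeadm.k8s.io/v1beta3\n" +
        "kind: ClusterConfiguration\n" +
        "kubernetesVersion: {{.K8sVersion}}\n" +
        "networking:\n" +
        "  podSubnet: \"{{.PodSubnet}}\"\n"

    func main() {
        p := kubeadmParams{
            AdvertiseAddress: "192.168.39.252",
            BindPort:         8443,
            NodeName:         "embed-certs-665766",
            PodSubnet:        "10.244.0.0/16",
            K8sVersion:       "v1.28.4",
        }
        t := template.Must(template.New("kubeadm").Parse(kubeadmTmpl))
        if err := t.Execute(os.Stdout, p); err != nil {
            panic(err)
        }
    }
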
	I0229 02:15:23.461154  361093 kubeadm.go:976] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=embed-certs-665766 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.252
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:embed-certs-665766 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0229 02:15:23.461223  361093 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0229 02:15:23.472810  361093 binaries.go:44] Found k8s binaries, skipping transfer
	I0229 02:15:23.472873  361093 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0229 02:15:23.483214  361093 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (392 bytes)
	I0229 02:15:23.502301  361093 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0229 02:15:23.522993  361093 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2113 bytes)
	I0229 02:15:23.543866  361093 ssh_runner.go:195] Run: grep 192.168.39.252	control-plane.minikube.internal$ /etc/hosts
	I0229 02:15:23.548448  361093 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.252	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
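
Both /etc/hosts edits (host.minikube.internal earlier, control-plane.minikube.internal here) use the same idempotent pattern: grep -v strips any stale mapping, the fresh line is appended, and the result is copied back, so repeated starts never accumulate duplicates. A local Go sketch of the pattern (the real flow runs the bash one-liner over ssh, as logged):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // pinHost drops any stale mapping for host and appends a fresh
    // ip<TAB>host line, so repeated runs never duplicate entries.
    func pinHost(hostsPath, ip, host string) error {
        data, err := os.ReadFile(hostsPath)
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if strings.HasSuffix(line, "\t"+host) {
                continue // stale mapping, filtered like grep -v
            }
            kept = append(kept, line)
        }
        kept = append(kept, fmt.Sprintf("%s\t%s", ip, host))
        return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0644)
    }

    func main() {
        if err := pinHost("/etc/hosts", "192.168.39.252", "control-plane.minikube.internal"); err != nil {
            panic(err)
        }
    }
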
	I0229 02:15:23.561909  361093 certs.go:56] Setting up /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/embed-certs-665766 for IP: 192.168.39.252
	I0229 02:15:23.561962  361093 certs.go:190] acquiring lock for shared ca certs: {Name:mkd93205d1e0ff28501dacf7d21e224f19de9501 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:15:23.562164  361093 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18063-309085/.minikube/ca.key
	I0229 02:15:23.562207  361093 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18063-309085/.minikube/proxy-client-ca.key
	I0229 02:15:23.562316  361093 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/embed-certs-665766/client.key
	I0229 02:15:23.562390  361093 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/embed-certs-665766/apiserver.key.ba3365be
	I0229 02:15:23.562442  361093 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/embed-certs-665766/proxy-client.key
	I0229 02:15:23.562597  361093 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-309085/.minikube/certs/home/jenkins/minikube-integration/18063-309085/.minikube/certs/316336.pem (1338 bytes)
	W0229 02:15:23.562642  361093 certs.go:433] ignoring /home/jenkins/minikube-integration/18063-309085/.minikube/certs/home/jenkins/minikube-integration/18063-309085/.minikube/certs/316336_empty.pem, impossibly tiny 0 bytes
	I0229 02:15:23.562657  361093 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-309085/.minikube/certs/home/jenkins/minikube-integration/18063-309085/.minikube/certs/ca-key.pem (1679 bytes)
	I0229 02:15:23.562691  361093 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-309085/.minikube/certs/home/jenkins/minikube-integration/18063-309085/.minikube/certs/ca.pem (1082 bytes)
	I0229 02:15:23.562725  361093 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-309085/.minikube/certs/home/jenkins/minikube-integration/18063-309085/.minikube/certs/cert.pem (1123 bytes)
	I0229 02:15:23.562747  361093 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-309085/.minikube/certs/home/jenkins/minikube-integration/18063-309085/.minikube/certs/key.pem (1675 bytes)
	I0229 02:15:23.562787  361093 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-309085/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18063-309085/.minikube/files/etc/ssl/certs/3163362.pem (1708 bytes)
	I0229 02:15:23.563460  361093 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/embed-certs-665766/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0229 02:15:23.592672  361093 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/embed-certs-665766/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0229 02:15:23.620893  361093 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/embed-certs-665766/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0229 02:15:23.648810  361093 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/embed-certs-665766/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0229 02:15:23.677012  361093 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-309085/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0229 02:15:23.704430  361093 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-309085/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0229 02:15:23.736296  361093 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-309085/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0229 02:15:23.765295  361093 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-309085/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0229 02:15:23.796388  361093 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-309085/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0229 02:15:23.824848  361093 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-309085/.minikube/certs/316336.pem --> /usr/share/ca-certificates/316336.pem (1338 bytes)
	I0229 02:15:23.852786  361093 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-309085/.minikube/files/etc/ssl/certs/3163362.pem --> /usr/share/ca-certificates/3163362.pem (1708 bytes)
	I0229 02:15:23.882944  361093 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0229 02:15:23.907836  361093 ssh_runner.go:195] Run: openssl version
	I0229 02:15:23.916052  361093 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0229 02:15:23.930370  361093 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0229 02:15:23.937378  361093 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 29 01:12 /usr/share/ca-certificates/minikubeCA.pem
	I0229 02:15:23.937461  361093 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0229 02:15:23.944482  361093 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0229 02:15:23.956702  361093 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/316336.pem && ln -fs /usr/share/ca-certificates/316336.pem /etc/ssl/certs/316336.pem"
	I0229 02:15:23.968559  361093 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/316336.pem
	I0229 02:15:23.974129  361093 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 29 01:18 /usr/share/ca-certificates/316336.pem
	I0229 02:15:23.974207  361093 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/316336.pem
	I0229 02:15:23.980916  361093 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/316336.pem /etc/ssl/certs/51391683.0"
	I0229 02:15:23.993131  361093 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3163362.pem && ln -fs /usr/share/ca-certificates/3163362.pem /etc/ssl/certs/3163362.pem"
	I0229 02:15:24.005391  361093 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3163362.pem
	I0229 02:15:24.010645  361093 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 29 01:18 /usr/share/ca-certificates/3163362.pem
	I0229 02:15:24.010717  361093 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3163362.pem
	I0229 02:15:24.017160  361093 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3163362.pem /etc/ssl/certs/3ec20f2e.0"
	I0229 02:15:24.029150  361093 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0229 02:15:24.033893  361093 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0229 02:15:24.040509  361093 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0229 02:15:24.047587  361093 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0229 02:15:24.054651  361093 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0229 02:15:24.061675  361093 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0229 02:15:24.068724  361093 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
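
Each `openssl x509 -checkend 86400` above exits non-zero if the certificate expires within 86400 seconds (24 hours); all six checks passing is what lets the restart path reuse the existing control-plane certificates. An equivalent check with crypto/x509, as a sketch (the path and helper name are illustrative):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the PEM certificate at path expires
    // inside the given window, the same test -checkend 86400 performs.
    func expiresWithin(path string, window time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM block in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(window).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/etcd/server.crt", 24*time.Hour)
        if err != nil {
            panic(err)
        }
        fmt.Println("expires within 24h:", soon)
    }
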
	I0229 02:15:24.075815  361093 kubeadm.go:404] StartCluster: {Name:embed-certs-665766 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-665766 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.252 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 02:15:24.075975  361093 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0229 02:15:24.076030  361093 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0229 02:15:24.117750  361093 cri.go:89] found id: "b8995b533d655528c2a8eb0c53c0fbb42bbcbe14783dd91f6cd5138bba8e2549"
	I0229 02:15:24.117784  361093 cri.go:89] found id: "42aa7bcbeb0511f905d4e224009d29ed4c569f9cd550d41c85ba6f1ba223c630"
	I0229 02:15:24.117789  361093 cri.go:89] found id: "88e69487fd9180cc2d5f9ec0208eac8913eadd02ba14d3b34ced6bbfeb665662"
	I0229 02:15:24.117793  361093 cri.go:89] found id: "a41584589c6bb92714424efe15054130b4ceb896f3156efa2a7766d6b59d9348"
	I0229 02:15:24.117797  361093 cri.go:89] found id: "b25459b6c6752189380e056e8069c48978248a2010e3dcf020d4fb1d86ede5bb"
	I0229 02:15:24.117806  361093 cri.go:89] found id: "05238b2e56a497eb44499d65d50fbd05b37efc25e20c7faac8ad0a3850f903a4"
	I0229 02:15:24.117810  361093 cri.go:89] found id: "2211513b5227406a5b1d1ef326c875559d34343ee2fbbcaed625e049fb7ed1dd"
	I0229 02:15:24.117814  361093 cri.go:89] found id: "8e4fbd5378900f0e966b2c43bb36dc6420963546aef89d8c08eff3b48520a5b3"
	I0229 02:15:24.117820  361093 cri.go:89] found id: ""
	I0229 02:15:24.117872  361093 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0229 02:15:24.132769  361093 cri.go:116] JSON = null
	W0229 02:15:24.132821  361093 kubeadm.go:411] unpause failed: list paused: list returned 0 containers, but ps returned 8
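
The warning above is benign: crictl found 8 kube-system containers, but `runc list -f json` printed `null`, which decodes to an empty list, so there was nothing paused to resume. A sketch of that comparison; the JSON field names match runc's output, the rest is illustrative:

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    // runcContainer holds the fields of interest from `runc list -f json`.
    type runcContainer struct {
        ID     string `json:"id"`
        Status string `json:"status"`
    }

    func listRunc(root string) ([]runcContainer, error) {
        out, err := exec.Command("sudo", "runc", "--root", root, "list", "-f", "json").Output()
        if err != nil {
            return nil, err
        }
        var cs []runcContainer
        // A literal "null" unmarshals into a nil slice: zero containers,
        // which is exactly the mismatch the warning reports.
        if err := json.Unmarshal(out, &cs); err != nil {
            return nil, err
        }
        return cs, nil
    }

    func main() {
        cs, err := listRunc("/run/containerd/runc/k8s.io")
        if err != nil {
            panic(err)
        }
        fmt.Printf("runc reports %d containers\n", len(cs))
    }
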
	I0229 02:15:24.132878  361093 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0229 02:15:24.143554  361093 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0229 02:15:24.143571  361093 kubeadm.go:636] restartCluster start
	I0229 02:15:24.143614  361093 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0229 02:15:24.154226  361093 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:15:24.154952  361093 kubeconfig.go:135] verify returned: extract IP: "embed-certs-665766" does not appear in /home/jenkins/minikube-integration/18063-309085/kubeconfig
	I0229 02:15:24.155312  361093 kubeconfig.go:146] "embed-certs-665766" context is missing from /home/jenkins/minikube-integration/18063-309085/kubeconfig - will repair!
	I0229 02:15:24.155887  361093 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18063-309085/kubeconfig: {Name:mkd85f0f36cbc770f723a754929a01738907b7bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:15:24.157235  361093 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0229 02:15:24.167314  361093 api_server.go:166] Checking apiserver status ...
	I0229 02:15:24.167357  361093 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:15:24.183158  361093 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
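
From here api_server.go polls `pgrep -xnf kube-apiserver.*minikube.*` about twice per second; pgrep exiting with status 1 simply means no matching process exists yet, so each failed attempt is logged and retried until a pid appears or the wait budget runs out. A sketch of that loop:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitForAPIServerPid retries pgrep until it prints a pid or the
    // deadline passes; exit status 1 just means "not running yet".
    func waitForAPIServerPid(timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
            if err == nil {
                return string(out), nil // pid found
            }
            time.Sleep(500 * time.Millisecond)
        }
        return "", fmt.Errorf("kube-apiserver did not appear within %s", timeout)
    }

    func main() {
        pid, err := waitForAPIServerPid(10 * time.Second)
        if err != nil {
            panic(err)
        }
        fmt.Print(pid)
    }
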
	I0229 02:15:24.667580  361093 api_server.go:166] Checking apiserver status ...
	I0229 02:15:24.667698  361093 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:15:24.684726  361093 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:15:25.168335  361093 api_server.go:166] Checking apiserver status ...
	I0229 02:15:25.168431  361093 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:15:25.186032  361093 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:15:22.672998  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:23.173387  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:23.673270  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:24.173552  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:24.673074  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:25.173423  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:25.673502  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:26.173531  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:26.672644  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:27.173372  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:23.737162  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:15:26.235726  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:15:24.782276  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:15:27.278368  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:15:25.667972  361093 api_server.go:166] Checking apiserver status ...
	I0229 02:15:25.668059  361093 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:15:25.683528  361093 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:15:26.168096  361093 api_server.go:166] Checking apiserver status ...
	I0229 02:15:26.168217  361093 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:15:26.187348  361093 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:15:26.667839  361093 api_server.go:166] Checking apiserver status ...
	I0229 02:15:26.667920  361093 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:15:26.681557  361093 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:15:27.168163  361093 api_server.go:166] Checking apiserver status ...
	I0229 02:15:27.168262  361093 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:15:27.182779  361093 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:15:27.667408  361093 api_server.go:166] Checking apiserver status ...
	I0229 02:15:27.667531  361093 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:15:27.685526  361093 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:15:28.167636  361093 api_server.go:166] Checking apiserver status ...
	I0229 02:15:28.167744  361093 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:15:28.182746  361093 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:15:28.668333  361093 api_server.go:166] Checking apiserver status ...
	I0229 02:15:28.668407  361093 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:15:28.682544  361093 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:15:29.168119  361093 api_server.go:166] Checking apiserver status ...
	I0229 02:15:29.168237  361093 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:15:29.186304  361093 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:15:29.667836  361093 api_server.go:166] Checking apiserver status ...
	I0229 02:15:29.667914  361093 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:15:29.682884  361093 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:15:30.167618  361093 api_server.go:166] Checking apiserver status ...
	I0229 02:15:30.167731  361093 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:15:30.183089  361093 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:15:27.672738  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:28.173326  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:28.673063  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:29.173178  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:29.673323  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:30.173306  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:30.673429  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:31.172889  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:31.672643  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:32.173215  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:28.239896  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:15:30.735621  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:15:32.736326  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:15:29.278986  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:15:31.777035  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:15:33.777456  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:15:30.667487  361093 api_server.go:166] Checking apiserver status ...
	I0229 02:15:30.667592  361093 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:15:30.685344  361093 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:15:31.167811  361093 api_server.go:166] Checking apiserver status ...
	I0229 02:15:31.167925  361093 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:15:31.185254  361093 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:15:31.667737  361093 api_server.go:166] Checking apiserver status ...
	I0229 02:15:31.667837  361093 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:15:31.681151  361093 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:15:32.167727  361093 api_server.go:166] Checking apiserver status ...
	I0229 02:15:32.167846  361093 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:15:32.188215  361093 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:15:32.667436  361093 api_server.go:166] Checking apiserver status ...
	I0229 02:15:32.667540  361093 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:15:32.683006  361093 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:15:33.167461  361093 api_server.go:166] Checking apiserver status ...
	I0229 02:15:33.167553  361093 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:15:33.180891  361093 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:15:33.667404  361093 api_server.go:166] Checking apiserver status ...
	I0229 02:15:33.667497  361093 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:15:33.686220  361093 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:15:34.167884  361093 api_server.go:166] Checking apiserver status ...
	I0229 02:15:34.167985  361093 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:15:34.181808  361093 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:15:34.181848  361093 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0229 02:15:34.181863  361093 kubeadm.go:1135] stopping kube-system containers ...
	I0229 02:15:34.181878  361093 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0229 02:15:34.181945  361093 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0229 02:15:34.226002  361093 cri.go:89] found id: "b8995b533d655528c2a8eb0c53c0fbb42bbcbe14783dd91f6cd5138bba8e2549"
	I0229 02:15:34.226036  361093 cri.go:89] found id: "42aa7bcbeb0511f905d4e224009d29ed4c569f9cd550d41c85ba6f1ba223c630"
	I0229 02:15:34.226043  361093 cri.go:89] found id: "88e69487fd9180cc2d5f9ec0208eac8913eadd02ba14d3b34ced6bbfeb665662"
	I0229 02:15:34.226048  361093 cri.go:89] found id: "a41584589c6bb92714424efe15054130b4ceb896f3156efa2a7766d6b59d9348"
	I0229 02:15:34.226052  361093 cri.go:89] found id: "b25459b6c6752189380e056e8069c48978248a2010e3dcf020d4fb1d86ede5bb"
	I0229 02:15:34.226058  361093 cri.go:89] found id: "05238b2e56a497eb44499d65d50fbd05b37efc25e20c7faac8ad0a3850f903a4"
	I0229 02:15:34.226062  361093 cri.go:89] found id: "2211513b5227406a5b1d1ef326c875559d34343ee2fbbcaed625e049fb7ed1dd"
	I0229 02:15:34.226067  361093 cri.go:89] found id: "8e4fbd5378900f0e966b2c43bb36dc6420963546aef89d8c08eff3b48520a5b3"
	I0229 02:15:34.226072  361093 cri.go:89] found id: ""
	I0229 02:15:34.226101  361093 cri.go:234] Stopping containers: [b8995b533d655528c2a8eb0c53c0fbb42bbcbe14783dd91f6cd5138bba8e2549 42aa7bcbeb0511f905d4e224009d29ed4c569f9cd550d41c85ba6f1ba223c630 88e69487fd9180cc2d5f9ec0208eac8913eadd02ba14d3b34ced6bbfeb665662 a41584589c6bb92714424efe15054130b4ceb896f3156efa2a7766d6b59d9348 b25459b6c6752189380e056e8069c48978248a2010e3dcf020d4fb1d86ede5bb 05238b2e56a497eb44499d65d50fbd05b37efc25e20c7faac8ad0a3850f903a4 2211513b5227406a5b1d1ef326c875559d34343ee2fbbcaed625e049fb7ed1dd 8e4fbd5378900f0e966b2c43bb36dc6420963546aef89d8c08eff3b48520a5b3]
	I0229 02:15:34.226179  361093 ssh_runner.go:195] Run: which crictl
	I0229 02:15:34.230963  361093 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop --timeout=10 b8995b533d655528c2a8eb0c53c0fbb42bbcbe14783dd91f6cd5138bba8e2549 42aa7bcbeb0511f905d4e224009d29ed4c569f9cd550d41c85ba6f1ba223c630 88e69487fd9180cc2d5f9ec0208eac8913eadd02ba14d3b34ced6bbfeb665662 a41584589c6bb92714424efe15054130b4ceb896f3156efa2a7766d6b59d9348 b25459b6c6752189380e056e8069c48978248a2010e3dcf020d4fb1d86ede5bb 05238b2e56a497eb44499d65d50fbd05b37efc25e20c7faac8ad0a3850f903a4 2211513b5227406a5b1d1ef326c875559d34343ee2fbbcaed625e049fb7ed1dd 8e4fbd5378900f0e966b2c43bb36dc6420963546aef89d8c08eff3b48520a5b3
	I0229 02:15:34.280013  361093 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0229 02:15:34.303092  361093 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 02:15:34.313538  361093 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 02:15:34.313601  361093 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0229 02:15:34.324217  361093 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0229 02:15:34.324245  361093 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 02:15:34.474732  361093 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 02:15:32.672712  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:33.172874  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:33.672874  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:34.173296  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:34.673021  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:35.172643  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:35.672743  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:36.172648  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:36.673171  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:37.172582  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:35.237112  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:15:37.240703  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:15:35.779547  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:15:37.779743  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:15:35.326453  361093 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0229 02:15:35.551798  361093 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 02:15:35.634250  361093 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
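
Because valid certificates and configs were already on disk, the restart path replays individual `kubeadm init` phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the regenerated /var/tmp/minikube/kubeadm.yaml instead of running a full init. A local sketch of that sequence (minikube actually drives these over ssh with the pinned binaries PATH, as logged):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        // Replay selected init phases against the regenerated config,
        // in the same order as the log above.
        phases := [][]string{
            {"certs", "all"},
            {"kubeconfig", "all"},
            {"kubelet-start"},
            {"control-plane", "all"},
            {"etcd", "local"},
        }
        for _, phase := range phases {
            args := append([]string{"init", "phase"}, phase...)
            args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
            cmd := exec.Command("kubeadm", args...)
            cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
            if err := cmd.Run(); err != nil {
                fmt.Fprintf(os.Stderr, "phase %v failed: %v\n", phase, err)
                os.Exit(1)
            }
        }
    }
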
	I0229 02:15:35.722113  361093 api_server.go:52] waiting for apiserver process to appear ...
	I0229 02:15:35.722208  361093 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:36.222305  361093 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:36.723392  361093 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:37.223304  361093 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:37.251520  361093 api_server.go:72] duration metric: took 1.52940545s to wait for apiserver process to appear ...
	I0229 02:15:37.251556  361093 api_server.go:88] waiting for apiserver healthz status ...
	I0229 02:15:37.251583  361093 api_server.go:253] Checking apiserver healthz at https://192.168.39.252:8443/healthz ...
	I0229 02:15:37.252131  361093 api_server.go:269] stopped: https://192.168.39.252:8443/healthz: Get "https://192.168.39.252:8443/healthz": dial tcp 192.168.39.252:8443: connect: connection refused
	I0229 02:15:37.751668  361093 api_server.go:253] Checking apiserver healthz at https://192.168.39.252:8443/healthz ...
	I0229 02:15:40.172368  361093 api_server.go:279] https://192.168.39.252:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0229 02:15:40.172411  361093 api_server.go:103] status: https://192.168.39.252:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0229 02:15:40.172431  361093 api_server.go:253] Checking apiserver healthz at https://192.168.39.252:8443/healthz ...
	I0229 02:15:40.219812  361093 api_server.go:279] https://192.168.39.252:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0229 02:15:40.219848  361093 api_server.go:103] status: https://192.168.39.252:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0229 02:15:40.251758  361093 api_server.go:253] Checking apiserver healthz at https://192.168.39.252:8443/healthz ...
	I0229 02:15:40.277955  361093 api_server.go:279] https://192.168.39.252:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0229 02:15:40.277987  361093 api_server.go:103] status: https://192.168.39.252:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0229 02:15:40.751985  361093 api_server.go:253] Checking apiserver healthz at https://192.168.39.252:8443/healthz ...
	I0229 02:15:40.760486  361093 api_server.go:279] https://192.168.39.252:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0229 02:15:40.760517  361093 api_server.go:103] status: https://192.168.39.252:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0229 02:15:41.252018  361093 api_server.go:253] Checking apiserver healthz at https://192.168.39.252:8443/healthz ...
	I0229 02:15:41.266211  361093 api_server.go:279] https://192.168.39.252:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0229 02:15:41.266256  361093 api_server.go:103] status: https://192.168.39.252:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0229 02:15:41.751788  361093 api_server.go:253] Checking apiserver healthz at https://192.168.39.252:8443/healthz ...
	I0229 02:15:41.761815  361093 api_server.go:279] https://192.168.39.252:8443/healthz returned 200:
	ok
	I0229 02:15:41.772061  361093 api_server.go:141] control plane version: v1.28.4
	I0229 02:15:41.772105  361093 api_server.go:131] duration metric: took 4.520539314s to wait for apiserver health ...
	I0229 02:15:41.772119  361093 cni.go:84] Creating CNI manager for ""
	I0229 02:15:41.772128  361093 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0229 02:15:41.774160  361093 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
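
The exchange above is the readiness gate minikube applies before moving on: /healthz?verbose is polled until every post-start hook reports [+] and the status flips from 500 to 200, which here took about 4.5s. A minimal sketch of that probe, assuming the endpoint from the log and skipping TLS verification because the sketch has no cluster CA (everything except the URL and the ~500ms cadence is illustrative):

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        // Endpoint taken from the log above; certificate verification is
        // skipped only because this sketch has no access to the cluster CA.
        url := "https://192.168.39.252:8443/healthz?verbose"
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        for {
            resp, err := client.Get(url)
            if err != nil {
                fmt.Println("healthz unreachable:", err)
            } else {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
                if resp.StatusCode == http.StatusOK {
                    return // every check reported ok
                }
            }
            time.Sleep(500 * time.Millisecond) // roughly the cadence in the log
        }
    }
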
	I0229 02:15:37.672994  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:38.172969  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:38.673225  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:39.173291  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:39.673458  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:40.172766  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:40.672830  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:41.173174  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:41.672618  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:42.172606  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
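
Meanwhile the 360776 run is stuck on a different gate: every ~500ms it reruns pgrep, looking for a kube-apiserver process whose full command line matches, and never gets a hit. A hedged local-exec sketch of that probe (the pgrep flags and pattern are the ones in the log; the Go wrapper is an assumption, since minikube actually drives this over SSH):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitForProcess polls pgrep until the pattern matches a full command
    // line (-f), exactly (-x), newest PID only (-n), or the deadline passes.
    func waitForProcess(pattern string, deadline time.Duration) (string, error) {
        stop := time.Now().Add(deadline)
        for time.Now().Before(stop) {
            out, err := exec.Command("sudo", "pgrep", "-xnf", pattern).Output()
            if err == nil {
                return string(out), nil // pgrep exits 0 only on a match
            }
            time.Sleep(500 * time.Millisecond) // cadence seen in the log
        }
        return "", fmt.Errorf("no process matching %q", pattern)
    }

    func main() {
        pid, err := waitForProcess("kube-apiserver.*minikube.*", 30*time.Second)
        if err != nil {
            fmt.Println(err)
            return
        }
        fmt.Print("apiserver pid: ", pid)
    }
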
	I0229 02:15:39.735965  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:15:41.737511  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:15:40.280036  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:15:42.777915  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:15:41.775526  361093 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0229 02:15:41.792000  361093 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
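
The 457-byte conflist itself is not reproduced in the log, so the payload below is an assumption: a minimal bridge-plugin conflist of the general shape such a file takes. Only the destination path comes from the log above.

    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        // Illustrative minimal bridge conflist; the actual file minikube
        // writes is not shown in the log, so the plugin fields and subnet
        // here are assumptions, not the real payload.
        conflist := `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        }
      ]
    }
    `
        // Same path as the scp target in the log; requires root.
        if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
            fmt.Println("write failed:", err)
        }
    }
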
	I0229 02:15:41.824077  361093 system_pods.go:43] waiting for kube-system pods to appear ...
	I0229 02:15:41.837796  361093 system_pods.go:59] 8 kube-system pods found
	I0229 02:15:41.837831  361093 system_pods.go:61] "coredns-5dd5756b68-jg9n5" [138dcd77-9fb3-4537-9459-87349af766d9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0229 02:15:41.837839  361093 system_pods.go:61] "etcd-embed-certs-665766" [039cfea9-3fcf-4a51-85b9-63c0977c701f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0229 02:15:41.837847  361093 system_pods.go:61] "kube-apiserver-embed-certs-665766" [6cb7255e-9e43-4b01-a138-34734a11139b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0229 02:15:41.837854  361093 system_pods.go:61] "kube-controller-manager-embed-certs-665766" [aa50c4f2-0528-4366-bc5c-4b625ddbb3cc] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0229 02:15:41.837862  361093 system_pods.go:61] "kube-proxy-xctbw" [ab0177e6-72c5-4bdf-a6b4-fa28d0a500eb] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0229 02:15:41.837867  361093 system_pods.go:61] "kube-scheduler-embed-certs-665766" [0013ea0f-3fa3-426e-8e0f-709889bb7239] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0229 02:15:41.837873  361093 system_pods.go:61] "metrics-server-57f55c9bc5-9sdkl" [5d0edfb3-db05-4877-b2e1-b7dda944ee2e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0229 02:15:41.837878  361093 system_pods.go:61] "storage-provisioner" [1bfb386b-a55e-47c2-873c-894fb156094f] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0229 02:15:41.837885  361093 system_pods.go:74] duration metric: took 13.782999ms to wait for pod list to return data ...
	I0229 02:15:41.837894  361093 node_conditions.go:102] verifying NodePressure condition ...
	I0229 02:15:41.846499  361093 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0229 02:15:41.846534  361093 node_conditions.go:123] node cpu capacity is 2
	I0229 02:15:41.846549  361093 node_conditions.go:105] duration metric: took 8.649228ms to run NodePressure ...
	I0229 02:15:41.846602  361093 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 02:15:42.233849  361093 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0229 02:15:42.244135  361093 kubeadm.go:787] kubelet initialised
	I0229 02:15:42.244157  361093 kubeadm.go:788] duration metric: took 10.283459ms waiting for restarted kubelet to initialise ...
	I0229 02:15:42.244165  361093 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 02:15:42.251055  361093 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-jg9n5" in "kube-system" namespace to be "Ready" ...
	I0229 02:15:44.258993  361093 pod_ready.go:102] pod "coredns-5dd5756b68-jg9n5" in "kube-system" namespace has status "Ready":"False"
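
Each pod_ready line above is one iteration of a poll on the pod's Ready condition. A sketch of the same check using client-go, which is itself an assumption (minikube uses its own helpers); the pod name is the one from the log, while the kubeconfig path is illustrative:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // isReady reports whether the pod's Ready condition is True.
    func isReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        // Kubeconfig path is an assumption for this sketch.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        for {
            pod, err := cs.CoreV1().Pods("kube-system").Get(
                context.TODO(), "coredns-5dd5756b68-jg9n5", metav1.GetOptions{})
            if err == nil && isReady(pod) {
                fmt.Println("pod is Ready")
                return
            }
            time.Sleep(2 * time.Second) // the log rechecks roughly every 2s
        }
    }
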
	I0229 02:15:42.673016  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:43.173406  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:43.672843  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:44.173068  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:44.673562  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:45.172977  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:45.673254  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:46.172757  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:46.672796  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:47.173606  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:43.738332  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:15:46.236882  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:15:44.778794  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:15:47.278336  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:15:46.760126  361093 pod_ready.go:102] pod "coredns-5dd5756b68-jg9n5" in "kube-system" namespace has status "Ready":"False"
	I0229 02:15:48.761905  361093 pod_ready.go:102] pod "coredns-5dd5756b68-jg9n5" in "kube-system" namespace has status "Ready":"False"
	I0229 02:15:47.673527  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:48.173283  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:48.673578  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:15:48.673686  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:15:48.735531  360776 cri.go:89] found id: ""
	I0229 02:15:48.735560  360776 logs.go:276] 0 containers: []
	W0229 02:15:48.735572  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:15:48.735580  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:15:48.735665  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:15:48.777775  360776 cri.go:89] found id: ""
	I0229 02:15:48.777801  360776 logs.go:276] 0 containers: []
	W0229 02:15:48.777812  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:15:48.777819  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:15:48.777893  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:15:48.816348  360776 cri.go:89] found id: ""
	I0229 02:15:48.816382  360776 logs.go:276] 0 containers: []
	W0229 02:15:48.816391  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:15:48.816398  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:15:48.816466  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:15:48.856576  360776 cri.go:89] found id: ""
	I0229 02:15:48.856627  360776 logs.go:276] 0 containers: []
	W0229 02:15:48.856640  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:15:48.856648  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:15:48.856712  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:15:48.896298  360776 cri.go:89] found id: ""
	I0229 02:15:48.896325  360776 logs.go:276] 0 containers: []
	W0229 02:15:48.896333  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:15:48.896339  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:15:48.896419  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:15:48.939474  360776 cri.go:89] found id: ""
	I0229 02:15:48.939523  360776 logs.go:276] 0 containers: []
	W0229 02:15:48.939537  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:15:48.939545  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:15:48.939609  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:15:48.979602  360776 cri.go:89] found id: ""
	I0229 02:15:48.979630  360776 logs.go:276] 0 containers: []
	W0229 02:15:48.979642  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:15:48.979649  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:15:48.979734  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:15:49.020455  360776 cri.go:89] found id: ""
	I0229 02:15:49.020485  360776 logs.go:276] 0 containers: []
	W0229 02:15:49.020495  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:15:49.020505  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:15:49.020517  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:15:49.070608  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:15:49.070653  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:15:49.086878  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:15:49.086913  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:15:49.222506  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:15:49.222532  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:15:49.222565  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:15:49.261476  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:15:49.261507  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
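
The block above is the diagnostic fallback: with no apiserver process found, minikube asks crictl for every control-plane container, running or exited, and finds none, which is why it then turns to host logs. The same sweep, sketched as local exec (the crictl flags and component names are from the log; the loop itself is illustrative):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // --quiet prints only container IDs, and -a includes exited
        // containers, so empty output means nothing matching ever ran.
        components := []string{
            "kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet",
            "kubernetes-dashboard",
        }
        for _, name := range components {
            out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
            if err != nil {
                fmt.Println(name, "query failed:", err)
                continue
            }
            ids := strings.Fields(string(out))
            fmt.Printf("%s: %d containers\n", name, len(ids))
        }
    }
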
	I0229 02:15:51.812576  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:51.828566  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:15:51.828628  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:15:51.867885  360776 cri.go:89] found id: ""
	I0229 02:15:51.867913  360776 logs.go:276] 0 containers: []
	W0229 02:15:51.867922  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:15:51.867928  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:15:51.867999  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:15:51.910828  360776 cri.go:89] found id: ""
	I0229 02:15:51.910862  360776 logs.go:276] 0 containers: []
	W0229 02:15:51.910872  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:15:51.910879  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:15:51.910928  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:15:51.951547  360776 cri.go:89] found id: ""
	I0229 02:15:51.951578  360776 logs.go:276] 0 containers: []
	W0229 02:15:51.951590  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:15:51.951598  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:15:51.951683  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:15:51.992485  360776 cri.go:89] found id: ""
	I0229 02:15:51.992511  360776 logs.go:276] 0 containers: []
	W0229 02:15:51.992519  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:15:51.992525  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:15:51.992579  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:15:52.036445  360776 cri.go:89] found id: ""
	I0229 02:15:52.036481  360776 logs.go:276] 0 containers: []
	W0229 02:15:52.036494  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:15:52.036502  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:15:52.036567  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:15:52.075247  360776 cri.go:89] found id: ""
	I0229 02:15:52.075279  360776 logs.go:276] 0 containers: []
	W0229 02:15:52.075289  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:15:52.075298  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:15:52.075379  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:15:52.117468  360776 cri.go:89] found id: ""
	I0229 02:15:52.117498  360776 logs.go:276] 0 containers: []
	W0229 02:15:52.117507  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:15:52.117513  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:15:52.117575  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:15:52.156923  360776 cri.go:89] found id: ""
	I0229 02:15:52.156953  360776 logs.go:276] 0 containers: []
	W0229 02:15:52.156962  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:15:52.156972  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:15:52.156984  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:15:52.209140  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:15:52.209181  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:15:52.224877  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:15:52.224952  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:15:52.313049  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:15:52.313079  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:15:52.313096  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:15:48.237478  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:15:50.737111  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:15:52.737652  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:15:49.777365  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:15:51.778542  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:15:51.260945  361093 pod_ready.go:102] pod "coredns-5dd5756b68-jg9n5" in "kube-system" namespace has status "Ready":"False"
	I0229 02:15:52.758125  361093 pod_ready.go:92] pod "coredns-5dd5756b68-jg9n5" in "kube-system" namespace has status "Ready":"True"
	I0229 02:15:52.758156  361093 pod_ready.go:81] duration metric: took 10.507075504s waiting for pod "coredns-5dd5756b68-jg9n5" in "kube-system" namespace to be "Ready" ...
	I0229 02:15:52.758168  361093 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-665766" in "kube-system" namespace to be "Ready" ...
	I0229 02:15:54.767738  361093 pod_ready.go:102] pod "etcd-embed-certs-665766" in "kube-system" namespace has status "Ready":"False"
	I0229 02:15:52.361468  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:15:52.361520  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:15:54.934192  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:54.950604  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:15:54.950673  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:15:54.997665  360776 cri.go:89] found id: ""
	I0229 02:15:54.997700  360776 logs.go:276] 0 containers: []
	W0229 02:15:54.997713  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:15:54.997738  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:15:54.997824  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:15:55.043835  360776 cri.go:89] found id: ""
	I0229 02:15:55.043865  360776 logs.go:276] 0 containers: []
	W0229 02:15:55.043878  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:15:55.043885  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:15:55.043952  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:15:55.084745  360776 cri.go:89] found id: ""
	I0229 02:15:55.084773  360776 logs.go:276] 0 containers: []
	W0229 02:15:55.084784  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:15:55.084793  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:15:55.084857  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:15:55.126607  360776 cri.go:89] found id: ""
	I0229 02:15:55.126638  360776 logs.go:276] 0 containers: []
	W0229 02:15:55.126652  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:15:55.126660  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:15:55.126723  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:15:55.168954  360776 cri.go:89] found id: ""
	I0229 02:15:55.168984  360776 logs.go:276] 0 containers: []
	W0229 02:15:55.168997  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:15:55.169004  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:15:55.169068  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:15:55.209769  360776 cri.go:89] found id: ""
	I0229 02:15:55.209802  360776 logs.go:276] 0 containers: []
	W0229 02:15:55.209813  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:15:55.209819  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:15:55.209874  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:15:55.252174  360776 cri.go:89] found id: ""
	I0229 02:15:55.252206  360776 logs.go:276] 0 containers: []
	W0229 02:15:55.252218  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:15:55.252226  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:15:55.252280  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:15:55.301449  360776 cri.go:89] found id: ""
	I0229 02:15:55.301483  360776 logs.go:276] 0 containers: []
	W0229 02:15:55.301496  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:15:55.301508  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:15:55.301524  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:15:55.406764  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:15:55.406785  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:15:55.406810  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:15:55.450166  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:15:55.450213  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:15:55.499652  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:15:55.499703  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:15:55.548616  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:15:55.548665  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
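
When the CRI has nothing to show, the gathering step falls back to host-level sources: the kubelet and containerd journald units plus a filtered dmesg. A sketch of that collection under the same local-exec assumption (command strings mirror the log; running them through bash -c preserves the dmesg pipeline):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // The three host-level sources the log gathers when no CRI
        // containers are found.
        cmds := []string{
            "sudo journalctl -u kubelet -n 400",
            "sudo journalctl -u containerd -n 400",
            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
        }
        for _, c := range cmds {
            out, err := exec.Command("/bin/bash", "-c", c).CombinedOutput()
            if err != nil {
                fmt.Println(c, "failed:", err)
                continue
            }
            fmt.Printf("== %s ==\n%s\n", c, out)
        }
    }
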
	I0229 02:15:54.738939  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:15:57.236199  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:15:54.278386  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:15:56.779465  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:15:55.767698  361093 pod_ready.go:92] pod "etcd-embed-certs-665766" in "kube-system" namespace has status "Ready":"True"
	I0229 02:15:55.767724  361093 pod_ready.go:81] duration metric: took 3.009548645s waiting for pod "etcd-embed-certs-665766" in "kube-system" namespace to be "Ready" ...
	I0229 02:15:55.767733  361093 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-665766" in "kube-system" namespace to be "Ready" ...
	I0229 02:15:55.777263  361093 pod_ready.go:92] pod "kube-apiserver-embed-certs-665766" in "kube-system" namespace has status "Ready":"True"
	I0229 02:15:55.777303  361093 pod_ready.go:81] duration metric: took 9.561735ms waiting for pod "kube-apiserver-embed-certs-665766" in "kube-system" namespace to be "Ready" ...
	I0229 02:15:55.777315  361093 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-665766" in "kube-system" namespace to be "Ready" ...
	I0229 02:15:55.785388  361093 pod_ready.go:92] pod "kube-controller-manager-embed-certs-665766" in "kube-system" namespace has status "Ready":"True"
	I0229 02:15:55.785410  361093 pod_ready.go:81] duration metric: took 8.086257ms waiting for pod "kube-controller-manager-embed-certs-665766" in "kube-system" namespace to be "Ready" ...
	I0229 02:15:55.785420  361093 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-xctbw" in "kube-system" namespace to be "Ready" ...
	I0229 02:15:55.791419  361093 pod_ready.go:92] pod "kube-proxy-xctbw" in "kube-system" namespace has status "Ready":"True"
	I0229 02:15:55.791437  361093 pod_ready.go:81] duration metric: took 6.009783ms waiting for pod "kube-proxy-xctbw" in "kube-system" namespace to be "Ready" ...
	I0229 02:15:55.791448  361093 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-665766" in "kube-system" namespace to be "Ready" ...
	I0229 02:15:56.799602  361093 pod_ready.go:92] pod "kube-scheduler-embed-certs-665766" in "kube-system" namespace has status "Ready":"True"
	I0229 02:15:56.799631  361093 pod_ready.go:81] duration metric: took 1.008175236s waiting for pod "kube-scheduler-embed-certs-665766" in "kube-system" namespace to be "Ready" ...
	I0229 02:15:56.799644  361093 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace to be "Ready" ...
	I0229 02:15:58.807838  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:15:58.064634  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:58.080287  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:15:58.080365  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:15:58.119448  360776 cri.go:89] found id: ""
	I0229 02:15:58.119480  360776 logs.go:276] 0 containers: []
	W0229 02:15:58.119492  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:15:58.119500  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:15:58.119563  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:15:58.159896  360776 cri.go:89] found id: ""
	I0229 02:15:58.159926  360776 logs.go:276] 0 containers: []
	W0229 02:15:58.159937  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:15:58.159945  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:15:58.160009  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:15:58.197746  360776 cri.go:89] found id: ""
	I0229 02:15:58.197774  360776 logs.go:276] 0 containers: []
	W0229 02:15:58.197785  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:15:58.197794  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:15:58.197873  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:15:58.242003  360776 cri.go:89] found id: ""
	I0229 02:15:58.242031  360776 logs.go:276] 0 containers: []
	W0229 02:15:58.242043  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:15:58.242051  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:15:58.242143  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:15:58.282762  360776 cri.go:89] found id: ""
	I0229 02:15:58.282795  360776 logs.go:276] 0 containers: []
	W0229 02:15:58.282815  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:15:58.282823  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:15:58.282889  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:15:58.324333  360776 cri.go:89] found id: ""
	I0229 02:15:58.324364  360776 logs.go:276] 0 containers: []
	W0229 02:15:58.324374  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:15:58.324380  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:15:58.324436  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:15:58.392279  360776 cri.go:89] found id: ""
	I0229 02:15:58.392308  360776 logs.go:276] 0 containers: []
	W0229 02:15:58.392321  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:15:58.392329  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:15:58.392390  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:15:58.448147  360776 cri.go:89] found id: ""
	I0229 02:15:58.448181  360776 logs.go:276] 0 containers: []
	W0229 02:15:58.448194  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:15:58.448211  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:15:58.448259  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:15:58.501620  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:15:58.501657  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:15:58.519453  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:15:58.519486  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:15:58.595868  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:15:58.595897  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:15:58.595917  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:15:58.630969  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:15:58.631004  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:16:01.181602  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:16:01.196379  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:16:01.196456  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:16:01.237984  360776 cri.go:89] found id: ""
	I0229 02:16:01.238008  360776 logs.go:276] 0 containers: []
	W0229 02:16:01.238019  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:16:01.238028  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:16:01.238109  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:16:01.284709  360776 cri.go:89] found id: ""
	I0229 02:16:01.284737  360776 logs.go:276] 0 containers: []
	W0229 02:16:01.284748  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:16:01.284756  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:16:01.284829  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:16:01.328675  360776 cri.go:89] found id: ""
	I0229 02:16:01.328711  360776 logs.go:276] 0 containers: []
	W0229 02:16:01.328724  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:16:01.328732  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:16:01.328787  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:16:01.384088  360776 cri.go:89] found id: ""
	I0229 02:16:01.384118  360776 logs.go:276] 0 containers: []
	W0229 02:16:01.384127  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:16:01.384133  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:16:01.384182  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:16:01.444582  360776 cri.go:89] found id: ""
	I0229 02:16:01.444617  360776 logs.go:276] 0 containers: []
	W0229 02:16:01.444630  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:16:01.444638  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:16:01.444709  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:16:01.483202  360776 cri.go:89] found id: ""
	I0229 02:16:01.483237  360776 logs.go:276] 0 containers: []
	W0229 02:16:01.483250  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:16:01.483258  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:16:01.483327  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:16:01.520422  360776 cri.go:89] found id: ""
	I0229 02:16:01.520455  360776 logs.go:276] 0 containers: []
	W0229 02:16:01.520467  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:16:01.520475  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:16:01.520545  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:16:01.558295  360776 cri.go:89] found id: ""
	I0229 02:16:01.558327  360776 logs.go:276] 0 containers: []
	W0229 02:16:01.558336  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:16:01.558348  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:16:01.558363  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:16:01.594473  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:16:01.594508  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:16:01.640865  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:16:01.640906  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:16:01.691693  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:16:01.691746  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:16:01.708474  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:16:01.708507  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:16:01.788334  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:15:59.237127  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:01.237269  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:15:59.278029  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:01.278662  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:03.280874  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:01.309386  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:03.807534  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:04.288565  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:16:04.304344  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:16:04.304435  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:16:04.364586  360776 cri.go:89] found id: ""
	I0229 02:16:04.364623  360776 logs.go:276] 0 containers: []
	W0229 02:16:04.364635  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:16:04.364643  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:16:04.364712  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:16:04.423593  360776 cri.go:89] found id: ""
	I0229 02:16:04.423624  360776 logs.go:276] 0 containers: []
	W0229 02:16:04.423637  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:16:04.423646  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:16:04.423715  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:16:04.463437  360776 cri.go:89] found id: ""
	I0229 02:16:04.463471  360776 logs.go:276] 0 containers: []
	W0229 02:16:04.463482  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:16:04.463491  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:16:04.463553  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:16:04.500526  360776 cri.go:89] found id: ""
	I0229 02:16:04.500550  360776 logs.go:276] 0 containers: []
	W0229 02:16:04.500559  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:16:04.500565  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:16:04.500646  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:16:04.541324  360776 cri.go:89] found id: ""
	I0229 02:16:04.541363  360776 logs.go:276] 0 containers: []
	W0229 02:16:04.541376  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:16:04.541389  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:16:04.541466  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:16:04.586036  360776 cri.go:89] found id: ""
	I0229 02:16:04.586063  360776 logs.go:276] 0 containers: []
	W0229 02:16:04.586071  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:16:04.586093  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:16:04.586221  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:16:04.624838  360776 cri.go:89] found id: ""
	I0229 02:16:04.624864  360776 logs.go:276] 0 containers: []
	W0229 02:16:04.624873  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:16:04.624879  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:16:04.624942  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:16:04.665188  360776 cri.go:89] found id: ""
	I0229 02:16:04.665214  360776 logs.go:276] 0 containers: []
	W0229 02:16:04.665223  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:16:04.665235  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:16:04.665248  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:16:04.710572  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:16:04.710608  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:16:04.759440  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:16:04.759473  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:16:04.777220  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:16:04.777252  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:16:04.855773  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:16:04.855802  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:16:04.855820  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:16:03.736436  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:06.238443  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:05.779438  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:08.279021  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:05.808060  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:08.307721  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:07.391235  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:16:07.407347  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:16:07.407424  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:16:07.456950  360776 cri.go:89] found id: ""
	I0229 02:16:07.456978  360776 logs.go:276] 0 containers: []
	W0229 02:16:07.456988  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:16:07.456994  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:16:07.457056  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:16:07.501947  360776 cri.go:89] found id: ""
	I0229 02:16:07.501978  360776 logs.go:276] 0 containers: []
	W0229 02:16:07.501989  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:16:07.501996  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:16:07.502055  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:16:07.543248  360776 cri.go:89] found id: ""
	I0229 02:16:07.543283  360776 logs.go:276] 0 containers: []
	W0229 02:16:07.543296  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:16:07.543303  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:16:07.543369  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:16:07.580554  360776 cri.go:89] found id: ""
	I0229 02:16:07.580587  360776 logs.go:276] 0 containers: []
	W0229 02:16:07.580599  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:16:07.580606  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:16:07.580674  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:16:07.618930  360776 cri.go:89] found id: ""
	I0229 02:16:07.618955  360776 logs.go:276] 0 containers: []
	W0229 02:16:07.618966  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:16:07.618974  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:16:07.619038  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:16:07.656206  360776 cri.go:89] found id: ""
	I0229 02:16:07.656237  360776 logs.go:276] 0 containers: []
	W0229 02:16:07.656246  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:16:07.656252  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:16:07.656312  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:16:07.692225  360776 cri.go:89] found id: ""
	I0229 02:16:07.692255  360776 logs.go:276] 0 containers: []
	W0229 02:16:07.692266  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:16:07.692273  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:16:07.692334  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:16:07.728085  360776 cri.go:89] found id: ""
	I0229 02:16:07.728118  360776 logs.go:276] 0 containers: []
	W0229 02:16:07.728130  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:16:07.728143  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:16:07.728161  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:16:07.744078  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:16:07.744102  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:16:07.819861  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:16:07.819891  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:16:07.819906  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:16:07.854665  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:16:07.854694  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:16:07.899029  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:16:07.899059  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:16:10.449274  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:16:10.466228  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:16:10.466305  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:16:10.516655  360776 cri.go:89] found id: ""
	I0229 02:16:10.516686  360776 logs.go:276] 0 containers: []
	W0229 02:16:10.516699  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:16:10.516707  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:16:10.516776  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:16:10.551194  360776 cri.go:89] found id: ""
	I0229 02:16:10.551222  360776 logs.go:276] 0 containers: []
	W0229 02:16:10.551240  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:16:10.551247  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:16:10.551309  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:16:10.586984  360776 cri.go:89] found id: ""
	I0229 02:16:10.587012  360776 logs.go:276] 0 containers: []
	W0229 02:16:10.587021  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:16:10.587033  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:16:10.587101  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:16:10.631726  360776 cri.go:89] found id: ""
	I0229 02:16:10.631758  360776 logs.go:276] 0 containers: []
	W0229 02:16:10.631768  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:16:10.631775  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:16:10.631831  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:16:10.673054  360776 cri.go:89] found id: ""
	I0229 02:16:10.673090  360776 logs.go:276] 0 containers: []
	W0229 02:16:10.673102  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:16:10.673110  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:16:10.673175  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:16:10.716401  360776 cri.go:89] found id: ""
	I0229 02:16:10.716428  360776 logs.go:276] 0 containers: []
	W0229 02:16:10.716437  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:16:10.716448  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:16:10.716495  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:16:10.762425  360776 cri.go:89] found id: ""
	I0229 02:16:10.762451  360776 logs.go:276] 0 containers: []
	W0229 02:16:10.762460  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:16:10.762465  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:16:10.762523  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:16:10.800934  360776 cri.go:89] found id: ""
	I0229 02:16:10.800959  360776 logs.go:276] 0 containers: []
	W0229 02:16:10.800970  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
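The lines above are one complete probe cycle: the runner first looks for a kube-apiserver process with pgrep, then asks crictl for each expected control-plane container by name; every query returns an empty ID list, so each component is reported as not found. Below is a minimal local sketch of that probe loop, assuming crictl is on PATH and sudo is available; the real test harness runs these same commands over SSH inside the VM (ssh_runner.go), which this sketch does not reproduce.

    // probe.go: rough local equivalent of the crictl probe cycle in the log.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        components := []string{
            "kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
        }
        for _, name := range components {
            // Equivalent to the logged command: sudo crictl ps -a --quiet --name=<name>
            out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
            if err != nil {
                fmt.Printf("crictl failed for %q: %v\n", name, err)
                continue
            }
            ids := strings.Fields(string(out))
            if len(ids) == 0 {
                // --quiet prints only container IDs; empty output is the
                // "No container was found matching" case in the log.
                fmt.Printf("No container was found matching %q\n", name)
                continue
            }
            fmt.Printf("%q: found %d container(s): %v\n", name, len(ids), ids)
        }
    }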
	I0229 02:16:10.800981  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:16:10.800995  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:16:10.851152  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:16:10.851178  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:16:10.865410  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:16:10.865436  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:16:10.941654  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
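The recurring warning above comes from the "describe nodes" gathering step: it shells out to the version-pinned kubectl binary against the node-local kubeconfig, and with no apiserver listening on localhost:8443 kubectl exits with status 1 and "connection refused". A minimal sketch of that step, using the same binary path and kubeconfig as the log but run locally rather than over the test's SSH channel (an assumption for self-containment):

    // describe.go: run the pinned kubectl and surface the failure the log shows.
    package main

    import (
        "bytes"
        "fmt"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("sudo", "/var/lib/minikube/binaries/v1.16.0/kubectl",
            "describe", "nodes", "--kubeconfig=/var/lib/minikube/kubeconfig")
        var stdout, stderr bytes.Buffer
        cmd.Stdout, cmd.Stderr = &stdout, &stderr
        if err := cmd.Run(); err != nil {
            // With the apiserver down, stderr carries the same
            // "connection to the server localhost:8443 was refused" message.
            fmt.Printf("failed describe nodes: %v\nstderr: %s", err, stderr.String())
            return
        }
        fmt.Print(stdout.String())
    }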
	I0229 02:16:10.941679  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:16:10.941699  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:16:10.977068  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:16:10.977099  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:16:08.736174  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:10.738304  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:10.779517  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:13.277888  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:10.308754  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:12.807138  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:14.807518  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:13.524032  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:16:13.540646  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:16:13.540721  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:16:13.584696  360776 cri.go:89] found id: ""
	I0229 02:16:13.584727  360776 logs.go:276] 0 containers: []
	W0229 02:16:13.584740  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:16:13.584748  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:16:13.584819  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:16:13.620800  360776 cri.go:89] found id: ""
	I0229 02:16:13.620843  360776 logs.go:276] 0 containers: []
	W0229 02:16:13.620852  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:16:13.620858  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:16:13.620936  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:16:13.659179  360776 cri.go:89] found id: ""
	I0229 02:16:13.659209  360776 logs.go:276] 0 containers: []
	W0229 02:16:13.659218  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:16:13.659224  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:16:13.659286  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:16:13.695772  360776 cri.go:89] found id: ""
	I0229 02:16:13.695821  360776 logs.go:276] 0 containers: []
	W0229 02:16:13.695832  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:16:13.695840  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:16:13.695902  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:16:13.736870  360776 cri.go:89] found id: ""
	I0229 02:16:13.736895  360776 logs.go:276] 0 containers: []
	W0229 02:16:13.736906  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:16:13.736913  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:16:13.736978  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:16:13.782101  360776 cri.go:89] found id: ""
	I0229 02:16:13.782131  360776 logs.go:276] 0 containers: []
	W0229 02:16:13.782143  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:16:13.782151  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:16:13.782212  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:16:13.822638  360776 cri.go:89] found id: ""
	I0229 02:16:13.822663  360776 logs.go:276] 0 containers: []
	W0229 02:16:13.822672  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:16:13.822677  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:16:13.822741  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:16:13.861761  360776 cri.go:89] found id: ""
	I0229 02:16:13.861787  360776 logs.go:276] 0 containers: []
	W0229 02:16:13.861798  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:16:13.861811  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:16:13.861835  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:16:13.877464  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:16:13.877494  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:16:13.955485  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:16:13.955512  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:16:13.955525  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:16:13.990560  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:16:13.990594  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:16:14.037740  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:16:14.037780  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:16:16.588097  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:16:16.603732  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:16:16.603810  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:16:16.644337  360776 cri.go:89] found id: ""
	I0229 02:16:16.644372  360776 logs.go:276] 0 containers: []
	W0229 02:16:16.644393  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:16:16.644404  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:16:16.644474  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:16:16.687530  360776 cri.go:89] found id: ""
	I0229 02:16:16.687562  360776 logs.go:276] 0 containers: []
	W0229 02:16:16.687575  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:16:16.687584  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:16:16.687653  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:16:16.728007  360776 cri.go:89] found id: ""
	I0229 02:16:16.728037  360776 logs.go:276] 0 containers: []
	W0229 02:16:16.728054  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:16:16.728063  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:16:16.728125  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:16:16.770904  360776 cri.go:89] found id: ""
	I0229 02:16:16.770952  360776 logs.go:276] 0 containers: []
	W0229 02:16:16.770964  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:16:16.770973  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:16:16.771041  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:16:16.812270  360776 cri.go:89] found id: ""
	I0229 02:16:16.812294  360776 logs.go:276] 0 containers: []
	W0229 02:16:16.812303  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:16:16.812309  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:16:16.812358  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:16:16.854461  360776 cri.go:89] found id: ""
	I0229 02:16:16.854487  360776 logs.go:276] 0 containers: []
	W0229 02:16:16.854495  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:16:16.854502  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:16:16.854565  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:16:16.893048  360776 cri.go:89] found id: ""
	I0229 02:16:16.893081  360776 logs.go:276] 0 containers: []
	W0229 02:16:16.893093  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:16:16.893102  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:16:16.893175  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:16:16.934533  360776 cri.go:89] found id: ""
	I0229 02:16:16.934565  360776 logs.go:276] 0 containers: []
	W0229 02:16:16.934576  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:16:16.934589  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:16:16.934608  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:16:16.949773  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:16:16.949806  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:16:17.030457  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:16:17.030483  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:16:17.030500  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:16:17.066911  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:16:17.066947  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:16:17.141648  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:16:17.141680  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
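Each cycle ends with the same gathering pass seen here: kubelet and containerd logs from journalctl, kernel warnings from dmesg, and container status from crictl with a docker fallback. A self-contained sketch of that pass, using the exact command lines from the log but executed locally (the harness wraps each in /bin/bash -c over SSH):

    // gather.go: replay of the log-gathering commands from the report.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        steps := []struct{ name, cmdline string }{
            {"kubelet", "sudo journalctl -u kubelet -n 400"},
            {"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
            {"containerd", "sudo journalctl -u containerd -n 400"},
            {"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
        }
        for _, s := range steps {
            fmt.Printf("Gathering logs for %s ...\n", s.name)
            // Mirror the runner: every command goes through /bin/bash -c.
            out, err := exec.Command("/bin/bash", "-c", s.cmdline).CombinedOutput()
            if err != nil {
                fmt.Printf("%s failed: %v\n", s.name, err)
            }
            fmt.Print(string(out))
        }
    }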
	I0229 02:16:13.236967  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:15.736473  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:15.278216  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:17.280028  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:17.307756  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:19.308255  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
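The interleaved pod_ready lines (PIDs 360079, 360217, 361093) come from three other test processes polling their metrics-server pods' Ready condition every couple of seconds. A rough equivalent of that wait, using kubectl with a JSONPath query instead of minikube's internal API client; the pod name and namespace are taken from the log, while the five-minute deadline and two-second interval are assumptions for illustration:

    // podready.go: poll a pod's Ready condition until True or deadline.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    func main() {
        const jsonpath = `{.status.conditions[?(@.type=="Ready")].status}`
        deadline := time.Now().Add(5 * time.Minute)
        for time.Now().Before(deadline) {
            out, err := exec.Command("kubectl", "get", "pod",
                "metrics-server-57f55c9bc5-5lfgm", "-n", "kube-system",
                "-o", "jsonpath="+jsonpath).Output()
            status := strings.TrimSpace(string(out))
            if err == nil && status == "True" {
                fmt.Println("pod is Ready")
                return
            }
            // Matches the shape of the log: status "Ready":"False" until it flips.
            fmt.Printf("pod has status \"Ready\":%q; retrying\n", status)
            time.Sleep(2 * time.Second)
        }
        fmt.Println("timed out waiting for pod to become Ready")
    }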
	I0229 02:16:19.697967  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:16:19.713729  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:16:19.713786  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:16:19.757898  360776 cri.go:89] found id: ""
	I0229 02:16:19.757929  360776 logs.go:276] 0 containers: []
	W0229 02:16:19.757940  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:16:19.757947  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:16:19.757998  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:16:19.807621  360776 cri.go:89] found id: ""
	I0229 02:16:19.807644  360776 logs.go:276] 0 containers: []
	W0229 02:16:19.807652  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:16:19.807658  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:16:19.807704  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:16:19.846030  360776 cri.go:89] found id: ""
	I0229 02:16:19.846060  360776 logs.go:276] 0 containers: []
	W0229 02:16:19.846071  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:16:19.846089  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:16:19.846157  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:16:19.881842  360776 cri.go:89] found id: ""
	I0229 02:16:19.881870  360776 logs.go:276] 0 containers: []
	W0229 02:16:19.881883  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:16:19.881892  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:16:19.881955  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:16:19.917791  360776 cri.go:89] found id: ""
	I0229 02:16:19.917818  360776 logs.go:276] 0 containers: []
	W0229 02:16:19.917830  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:16:19.917837  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:16:19.917922  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:16:19.954147  360776 cri.go:89] found id: ""
	I0229 02:16:19.954174  360776 logs.go:276] 0 containers: []
	W0229 02:16:19.954186  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:16:19.954194  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:16:19.954259  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:16:19.991466  360776 cri.go:89] found id: ""
	I0229 02:16:19.991495  360776 logs.go:276] 0 containers: []
	W0229 02:16:19.991505  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:16:19.991512  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:16:19.991566  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:16:20.032484  360776 cri.go:89] found id: ""
	I0229 02:16:20.032515  360776 logs.go:276] 0 containers: []
	W0229 02:16:20.032526  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:16:20.032540  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:16:20.032556  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:16:20.084743  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:16:20.084781  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:16:20.105586  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:16:20.105626  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:16:20.206486  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:16:20.206513  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:16:20.206528  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:16:20.250720  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:16:20.250748  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:16:18.235820  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:20.235852  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:22.237011  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:19.779151  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:22.278930  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:21.808852  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:24.307883  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:22.796158  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:16:22.812126  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:16:22.812208  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:16:22.849744  360776 cri.go:89] found id: ""
	I0229 02:16:22.849776  360776 logs.go:276] 0 containers: []
	W0229 02:16:22.849792  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:16:22.849800  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:16:22.849865  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:16:22.891875  360776 cri.go:89] found id: ""
	I0229 02:16:22.891909  360776 logs.go:276] 0 containers: []
	W0229 02:16:22.891921  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:16:22.891930  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:16:22.891995  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:16:22.931754  360776 cri.go:89] found id: ""
	I0229 02:16:22.931789  360776 logs.go:276] 0 containers: []
	W0229 02:16:22.931801  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:16:22.931809  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:16:22.931878  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:16:22.979291  360776 cri.go:89] found id: ""
	I0229 02:16:22.979322  360776 logs.go:276] 0 containers: []
	W0229 02:16:22.979340  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:16:22.979349  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:16:22.979437  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:16:23.028390  360776 cri.go:89] found id: ""
	I0229 02:16:23.028416  360776 logs.go:276] 0 containers: []
	W0229 02:16:23.028424  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:16:23.028430  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:16:23.028498  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:16:23.077140  360776 cri.go:89] found id: ""
	I0229 02:16:23.077174  360776 logs.go:276] 0 containers: []
	W0229 02:16:23.077187  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:16:23.077202  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:16:23.077274  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:16:23.124275  360776 cri.go:89] found id: ""
	I0229 02:16:23.124316  360776 logs.go:276] 0 containers: []
	W0229 02:16:23.124326  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:16:23.124333  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:16:23.124386  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:16:23.188748  360776 cri.go:89] found id: ""
	I0229 02:16:23.188789  360776 logs.go:276] 0 containers: []
	W0229 02:16:23.188801  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:16:23.188815  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:16:23.188833  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:16:23.247833  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:16:23.247863  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:16:23.263866  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:16:23.263891  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:16:23.347825  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:16:23.347851  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:16:23.347869  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:16:23.383517  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:16:23.383549  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:16:25.925662  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:16:25.940548  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:16:25.940604  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:16:25.977087  360776 cri.go:89] found id: ""
	I0229 02:16:25.977107  360776 logs.go:276] 0 containers: []
	W0229 02:16:25.977116  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:16:25.977149  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:16:25.977230  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:16:26.018569  360776 cri.go:89] found id: ""
	I0229 02:16:26.018602  360776 logs.go:276] 0 containers: []
	W0229 02:16:26.018615  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:16:26.018623  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:16:26.018682  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:16:26.057726  360776 cri.go:89] found id: ""
	I0229 02:16:26.057754  360776 logs.go:276] 0 containers: []
	W0229 02:16:26.057773  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:16:26.057782  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:16:26.057838  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:16:26.097203  360776 cri.go:89] found id: ""
	I0229 02:16:26.097234  360776 logs.go:276] 0 containers: []
	W0229 02:16:26.097247  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:16:26.097256  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:16:26.097322  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:16:26.141897  360776 cri.go:89] found id: ""
	I0229 02:16:26.141925  360776 logs.go:276] 0 containers: []
	W0229 02:16:26.141941  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:16:26.141948  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:16:26.142009  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:16:26.195074  360776 cri.go:89] found id: ""
	I0229 02:16:26.195101  360776 logs.go:276] 0 containers: []
	W0229 02:16:26.195110  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:16:26.195117  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:16:26.195176  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:16:26.252131  360776 cri.go:89] found id: ""
	I0229 02:16:26.252158  360776 logs.go:276] 0 containers: []
	W0229 02:16:26.252166  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:16:26.252172  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:16:26.252249  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:16:26.292730  360776 cri.go:89] found id: ""
	I0229 02:16:26.292752  360776 logs.go:276] 0 containers: []
	W0229 02:16:26.292760  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:16:26.292770  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:16:26.292781  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:16:26.375138  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:16:26.375165  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:16:26.375182  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:16:26.410167  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:16:26.410196  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:16:26.453622  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:16:26.453665  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:16:26.503732  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:16:26.503762  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:16:24.740152  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:27.236389  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:24.777323  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:26.778399  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:28.779480  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:26.308285  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:28.806555  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:29.018838  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:16:29.034894  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:16:29.034963  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:16:29.086433  360776 cri.go:89] found id: ""
	I0229 02:16:29.086460  360776 logs.go:276] 0 containers: []
	W0229 02:16:29.086472  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:16:29.086481  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:16:29.086562  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:16:29.134575  360776 cri.go:89] found id: ""
	I0229 02:16:29.134606  360776 logs.go:276] 0 containers: []
	W0229 02:16:29.134619  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:16:29.134627  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:16:29.134701  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:16:29.186372  360776 cri.go:89] found id: ""
	I0229 02:16:29.186408  360776 logs.go:276] 0 containers: []
	W0229 02:16:29.186420  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:16:29.186427  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:16:29.186481  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:16:29.236276  360776 cri.go:89] found id: ""
	I0229 02:16:29.236299  360776 logs.go:276] 0 containers: []
	W0229 02:16:29.236306  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:16:29.236312  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:16:29.236361  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:16:29.280342  360776 cri.go:89] found id: ""
	I0229 02:16:29.280371  360776 logs.go:276] 0 containers: []
	W0229 02:16:29.280380  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:16:29.280389  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:16:29.280461  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:16:29.325017  360776 cri.go:89] found id: ""
	I0229 02:16:29.325047  360776 logs.go:276] 0 containers: []
	W0229 02:16:29.325059  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:16:29.325068  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:16:29.325139  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:16:29.367912  360776 cri.go:89] found id: ""
	I0229 02:16:29.367941  360776 logs.go:276] 0 containers: []
	W0229 02:16:29.367951  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:16:29.367957  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:16:29.368021  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:16:29.404499  360776 cri.go:89] found id: ""
	I0229 02:16:29.404528  360776 logs.go:276] 0 containers: []
	W0229 02:16:29.404538  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:16:29.404548  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:16:29.404562  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:16:29.419724  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:16:29.419755  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:16:29.501923  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:16:29.501952  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:16:29.501971  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:16:29.536724  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:16:29.536762  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:16:29.579709  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:16:29.579744  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:16:32.129825  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:16:32.147723  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:16:32.147815  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:16:32.206978  360776 cri.go:89] found id: ""
	I0229 02:16:32.207016  360776 logs.go:276] 0 containers: []
	W0229 02:16:32.207030  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:16:32.207041  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:16:32.207140  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:16:32.265296  360776 cri.go:89] found id: ""
	I0229 02:16:32.265328  360776 logs.go:276] 0 containers: []
	W0229 02:16:32.265341  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:16:32.265350  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:16:32.265418  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:16:32.312827  360776 cri.go:89] found id: ""
	I0229 02:16:32.312862  360776 logs.go:276] 0 containers: []
	W0229 02:16:32.312874  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:16:32.312882  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:16:32.312946  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:16:29.736263  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:32.238217  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:31.277342  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:33.279528  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:30.806969  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:32.808795  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:32.359988  360776 cri.go:89] found id: ""
	I0229 02:16:32.360024  360776 logs.go:276] 0 containers: []
	W0229 02:16:32.360036  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:16:32.360045  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:16:32.360106  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:16:32.400969  360776 cri.go:89] found id: ""
	I0229 02:16:32.401003  360776 logs.go:276] 0 containers: []
	W0229 02:16:32.401015  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:16:32.401022  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:16:32.401075  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:16:32.437371  360776 cri.go:89] found id: ""
	I0229 02:16:32.437402  360776 logs.go:276] 0 containers: []
	W0229 02:16:32.437411  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:16:32.437419  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:16:32.437491  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:16:32.481199  360776 cri.go:89] found id: ""
	I0229 02:16:32.481227  360776 logs.go:276] 0 containers: []
	W0229 02:16:32.481238  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:16:32.481247  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:16:32.481329  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:16:32.528100  360776 cri.go:89] found id: ""
	I0229 02:16:32.528137  360776 logs.go:276] 0 containers: []
	W0229 02:16:32.528150  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:16:32.528163  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:16:32.528180  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:16:32.565087  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:16:32.565122  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:16:32.616350  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:16:32.616382  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:16:32.669978  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:16:32.670015  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:16:32.684373  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:16:32.684399  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:16:32.769992  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:16:35.270148  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:16:35.289949  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:16:35.290050  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:16:35.334051  360776 cri.go:89] found id: ""
	I0229 02:16:35.334091  360776 logs.go:276] 0 containers: []
	W0229 02:16:35.334103  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:16:35.334112  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:16:35.334170  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:16:35.378536  360776 cri.go:89] found id: ""
	I0229 02:16:35.378571  360776 logs.go:276] 0 containers: []
	W0229 02:16:35.378585  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:16:35.378594  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:16:35.378660  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:16:35.417867  360776 cri.go:89] found id: ""
	I0229 02:16:35.417894  360776 logs.go:276] 0 containers: []
	W0229 02:16:35.417905  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:16:35.417914  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:16:35.417982  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:16:35.455848  360776 cri.go:89] found id: ""
	I0229 02:16:35.455874  360776 logs.go:276] 0 containers: []
	W0229 02:16:35.455887  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:16:35.455896  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:16:35.455964  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:16:35.494787  360776 cri.go:89] found id: ""
	I0229 02:16:35.494814  360776 logs.go:276] 0 containers: []
	W0229 02:16:35.494822  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:16:35.494828  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:16:35.494890  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:16:35.533553  360776 cri.go:89] found id: ""
	I0229 02:16:35.533583  360776 logs.go:276] 0 containers: []
	W0229 02:16:35.533592  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:16:35.533600  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:16:35.533669  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:16:35.581381  360776 cri.go:89] found id: ""
	I0229 02:16:35.581412  360776 logs.go:276] 0 containers: []
	W0229 02:16:35.581422  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:16:35.581429  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:16:35.581494  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:16:35.619128  360776 cri.go:89] found id: ""
	I0229 02:16:35.619158  360776 logs.go:276] 0 containers: []
	W0229 02:16:35.619169  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:16:35.619181  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:16:35.619197  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:16:35.655180  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:16:35.655216  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:16:35.701558  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:16:35.701585  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:16:35.753639  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:16:35.753672  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:16:35.769711  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:16:35.769743  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:16:35.843861  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:16:34.735895  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:36.736525  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:35.280004  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:37.778345  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:35.308212  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:37.807970  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:38.345063  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:16:38.361259  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:16:38.361345  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:16:38.405901  360776 cri.go:89] found id: ""
	I0229 02:16:38.405936  360776 logs.go:276] 0 containers: []
	W0229 02:16:38.405949  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:16:38.405958  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:16:38.406027  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:16:38.447860  360776 cri.go:89] found id: ""
	I0229 02:16:38.447894  360776 logs.go:276] 0 containers: []
	W0229 02:16:38.447907  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:16:38.447915  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:16:38.447983  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:16:38.489711  360776 cri.go:89] found id: ""
	I0229 02:16:38.489737  360776 logs.go:276] 0 containers: []
	W0229 02:16:38.489746  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:16:38.489752  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:16:38.489815  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:16:38.527094  360776 cri.go:89] found id: ""
	I0229 02:16:38.527120  360776 logs.go:276] 0 containers: []
	W0229 02:16:38.527128  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:16:38.527135  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:16:38.527202  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:16:38.564125  360776 cri.go:89] found id: ""
	I0229 02:16:38.564165  360776 logs.go:276] 0 containers: []
	W0229 02:16:38.564175  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:16:38.564183  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:16:38.564257  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:16:38.604355  360776 cri.go:89] found id: ""
	I0229 02:16:38.604385  360776 logs.go:276] 0 containers: []
	W0229 02:16:38.604394  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:16:38.604401  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:16:38.604471  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:16:38.642291  360776 cri.go:89] found id: ""
	I0229 02:16:38.642329  360776 logs.go:276] 0 containers: []
	W0229 02:16:38.642338  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:16:38.642345  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:16:38.642425  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:16:38.684559  360776 cri.go:89] found id: ""
	I0229 02:16:38.684605  360776 logs.go:276] 0 containers: []
	W0229 02:16:38.684617  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:16:38.684632  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:16:38.684646  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:16:38.735189  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:16:38.735230  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:16:38.750359  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:16:38.750388  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:16:38.832749  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:16:38.832777  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:16:38.832793  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:16:38.871321  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:16:38.871355  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:16:41.429960  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:16:41.445002  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:16:41.445081  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:16:41.487833  360776 cri.go:89] found id: ""
	I0229 02:16:41.487867  360776 logs.go:276] 0 containers: []
	W0229 02:16:41.487880  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:16:41.487889  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:16:41.487953  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:16:41.527667  360776 cri.go:89] found id: ""
	I0229 02:16:41.527691  360776 logs.go:276] 0 containers: []
	W0229 02:16:41.527700  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:16:41.527706  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:16:41.527767  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:16:41.568252  360776 cri.go:89] found id: ""
	I0229 02:16:41.568279  360776 logs.go:276] 0 containers: []
	W0229 02:16:41.568289  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:16:41.568295  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:16:41.568347  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:16:41.606664  360776 cri.go:89] found id: ""
	I0229 02:16:41.606697  360776 logs.go:276] 0 containers: []
	W0229 02:16:41.606709  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:16:41.606717  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:16:41.606787  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:16:41.643384  360776 cri.go:89] found id: ""
	I0229 02:16:41.643413  360776 logs.go:276] 0 containers: []
	W0229 02:16:41.643425  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:16:41.643433  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:16:41.643488  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:16:41.685132  360776 cri.go:89] found id: ""
	I0229 02:16:41.685165  360776 logs.go:276] 0 containers: []
	W0229 02:16:41.685179  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:16:41.685188  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:16:41.685255  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:16:41.725844  360776 cri.go:89] found id: ""
	I0229 02:16:41.725874  360776 logs.go:276] 0 containers: []
	W0229 02:16:41.725888  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:16:41.725901  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:16:41.725959  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:16:41.764651  360776 cri.go:89] found id: ""
	I0229 02:16:41.764684  360776 logs.go:276] 0 containers: []
	W0229 02:16:41.764710  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:16:41.764728  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:16:41.764745  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:16:41.846499  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:16:41.846520  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:16:41.846534  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:16:41.889415  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:16:41.889454  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:16:41.955514  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:16:41.955554  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:16:42.011187  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:16:42.011231  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:16:38.736997  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:40.737109  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:39.778387  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:41.780284  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:40.308479  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:42.807142  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:44.808770  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
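	The pod_ready lines interleaved here come from three other test processes (PIDs 360079, 360217, 361093) concurrently polling the metrics-server pods of their own profiles, whose Ready condition stays "False" throughout. A manual equivalent of that readiness check, shown purely as an illustration (minikube queries the API directly rather than shelling out to kubectl; the pod name is the one from the log):

	    kubectl -n kube-system get pod metrics-server-57f55c9bc5-5lfgm \
	        -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'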
	I0229 02:16:44.528746  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:16:44.544657  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:16:44.544735  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:16:44.584593  360776 cri.go:89] found id: ""
	I0229 02:16:44.584619  360776 logs.go:276] 0 containers: []
	W0229 02:16:44.584628  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:16:44.584634  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:16:44.584703  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:16:44.621819  360776 cri.go:89] found id: ""
	I0229 02:16:44.621851  360776 logs.go:276] 0 containers: []
	W0229 02:16:44.621863  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:16:44.621870  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:16:44.621936  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:16:44.661908  360776 cri.go:89] found id: ""
	I0229 02:16:44.661939  360776 logs.go:276] 0 containers: []
	W0229 02:16:44.661951  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:16:44.661959  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:16:44.662042  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:16:44.703135  360776 cri.go:89] found id: ""
	I0229 02:16:44.703168  360776 logs.go:276] 0 containers: []
	W0229 02:16:44.703179  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:16:44.703186  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:16:44.703256  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:16:44.742783  360776 cri.go:89] found id: ""
	I0229 02:16:44.742812  360776 logs.go:276] 0 containers: []
	W0229 02:16:44.742823  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:16:44.742831  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:16:44.742900  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:16:44.786223  360776 cri.go:89] found id: ""
	I0229 02:16:44.786258  360776 logs.go:276] 0 containers: []
	W0229 02:16:44.786271  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:16:44.786280  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:16:44.786348  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:16:44.832269  360776 cri.go:89] found id: ""
	I0229 02:16:44.832295  360776 logs.go:276] 0 containers: []
	W0229 02:16:44.832304  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:16:44.832312  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:16:44.832371  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:16:44.882497  360776 cri.go:89] found id: ""
	I0229 02:16:44.882529  360776 logs.go:276] 0 containers: []
	W0229 02:16:44.882541  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:16:44.882554  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:16:44.882572  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:16:44.898452  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:16:44.898484  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:16:44.988062  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:16:44.988089  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:16:44.988106  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:16:45.025317  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:16:45.025353  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:16:45.069804  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:16:45.069843  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:16:43.236422  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:45.236874  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:47.238514  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:44.277544  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:46.279502  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:48.280224  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:46.809509  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:49.307555  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:47.621890  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:16:47.636506  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:16:47.636572  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:16:47.679975  360776 cri.go:89] found id: ""
	I0229 02:16:47.680007  360776 logs.go:276] 0 containers: []
	W0229 02:16:47.680019  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:16:47.680026  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:16:47.680099  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:16:47.720573  360776 cri.go:89] found id: ""
	I0229 02:16:47.720604  360776 logs.go:276] 0 containers: []
	W0229 02:16:47.720616  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:16:47.720628  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:16:47.720693  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:16:47.762211  360776 cri.go:89] found id: ""
	I0229 02:16:47.762239  360776 logs.go:276] 0 containers: []
	W0229 02:16:47.762256  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:16:47.762264  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:16:47.762325  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:16:47.801703  360776 cri.go:89] found id: ""
	I0229 02:16:47.801726  360776 logs.go:276] 0 containers: []
	W0229 02:16:47.801736  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:16:47.801745  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:16:47.801804  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:16:47.843036  360776 cri.go:89] found id: ""
	I0229 02:16:47.843065  360776 logs.go:276] 0 containers: []
	W0229 02:16:47.843074  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:16:47.843087  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:16:47.843137  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:16:47.901986  360776 cri.go:89] found id: ""
	I0229 02:16:47.902016  360776 logs.go:276] 0 containers: []
	W0229 02:16:47.902029  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:16:47.902037  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:16:47.902115  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:16:47.970578  360776 cri.go:89] found id: ""
	I0229 02:16:47.970626  360776 logs.go:276] 0 containers: []
	W0229 02:16:47.970638  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:16:47.970646  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:16:47.970727  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:16:48.008245  360776 cri.go:89] found id: ""
	I0229 02:16:48.008280  360776 logs.go:276] 0 containers: []
	W0229 02:16:48.008290  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:16:48.008303  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:16:48.008318  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:16:48.059243  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:16:48.059277  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:16:48.109287  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:16:48.109328  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:16:48.124720  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:16:48.124747  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:16:48.201686  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:16:48.201734  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:16:48.201750  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:16:50.740237  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:16:50.755100  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:16:50.755174  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:16:50.799284  360776 cri.go:89] found id: ""
	I0229 02:16:50.799304  360776 logs.go:276] 0 containers: []
	W0229 02:16:50.799312  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:16:50.799318  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:16:50.799367  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:16:50.863582  360776 cri.go:89] found id: ""
	I0229 02:16:50.863617  360776 logs.go:276] 0 containers: []
	W0229 02:16:50.863630  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:16:50.863638  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:16:50.863709  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:16:50.913067  360776 cri.go:89] found id: ""
	I0229 02:16:50.913097  360776 logs.go:276] 0 containers: []
	W0229 02:16:50.913107  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:16:50.913114  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:16:50.913181  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:16:50.964343  360776 cri.go:89] found id: ""
	I0229 02:16:50.964372  360776 logs.go:276] 0 containers: []
	W0229 02:16:50.964381  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:16:50.964387  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:16:50.964443  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:16:51.008180  360776 cri.go:89] found id: ""
	I0229 02:16:51.008215  360776 logs.go:276] 0 containers: []
	W0229 02:16:51.008226  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:16:51.008234  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:16:51.008314  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:16:51.050574  360776 cri.go:89] found id: ""
	I0229 02:16:51.050604  360776 logs.go:276] 0 containers: []
	W0229 02:16:51.050613  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:16:51.050619  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:16:51.050682  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:16:51.094144  360776 cri.go:89] found id: ""
	I0229 02:16:51.094170  360776 logs.go:276] 0 containers: []
	W0229 02:16:51.094180  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:16:51.094187  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:16:51.094254  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:16:51.133928  360776 cri.go:89] found id: ""
	I0229 02:16:51.133963  360776 logs.go:276] 0 containers: []
	W0229 02:16:51.133976  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:16:51.133989  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:16:51.134005  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:16:51.169857  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:16:51.169888  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:16:51.211739  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:16:51.211774  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:16:51.267237  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:16:51.267277  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:16:51.285167  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:16:51.285200  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:16:51.361051  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:16:49.736852  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:52.235969  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:50.781150  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:53.277926  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:51.307606  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:53.308568  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:53.861859  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:16:53.879047  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:16:53.879124  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:16:53.931722  360776 cri.go:89] found id: ""
	I0229 02:16:53.931751  360776 logs.go:276] 0 containers: []
	W0229 02:16:53.931761  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:16:53.931770  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:16:53.931843  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:16:53.989223  360776 cri.go:89] found id: ""
	I0229 02:16:53.989250  360776 logs.go:276] 0 containers: []
	W0229 02:16:53.989259  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:16:53.989266  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:16:53.989316  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:16:54.029340  360776 cri.go:89] found id: ""
	I0229 02:16:54.029367  360776 logs.go:276] 0 containers: []
	W0229 02:16:54.029379  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:16:54.029394  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:16:54.029455  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:16:54.065032  360776 cri.go:89] found id: ""
	I0229 02:16:54.065061  360776 logs.go:276] 0 containers: []
	W0229 02:16:54.065072  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:16:54.065081  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:16:54.065148  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:16:54.103739  360776 cri.go:89] found id: ""
	I0229 02:16:54.103771  360776 logs.go:276] 0 containers: []
	W0229 02:16:54.103783  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:16:54.103791  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:16:54.103886  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:16:54.146653  360776 cri.go:89] found id: ""
	I0229 02:16:54.146706  360776 logs.go:276] 0 containers: []
	W0229 02:16:54.146720  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:16:54.146728  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:16:54.146804  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:16:54.183885  360776 cri.go:89] found id: ""
	I0229 02:16:54.183909  360776 logs.go:276] 0 containers: []
	W0229 02:16:54.183917  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:16:54.183923  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:16:54.183985  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:16:54.223712  360776 cri.go:89] found id: ""
	I0229 02:16:54.223739  360776 logs.go:276] 0 containers: []
	W0229 02:16:54.223748  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:16:54.223758  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:16:54.223776  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:16:54.239418  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:16:54.239443  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:16:54.316236  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:16:54.316262  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:16:54.316278  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:16:54.351899  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:16:54.351933  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:16:54.396954  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:16:54.396990  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:16:56.949058  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:16:56.965888  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:16:56.965966  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:16:57.010067  360776 cri.go:89] found id: ""
	I0229 02:16:57.010114  360776 logs.go:276] 0 containers: []
	W0229 02:16:57.010127  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:16:57.010136  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:16:57.010199  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:16:57.048082  360776 cri.go:89] found id: ""
	I0229 02:16:57.048108  360776 logs.go:276] 0 containers: []
	W0229 02:16:57.048116  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:16:57.048123  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:16:57.048172  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:16:57.082859  360776 cri.go:89] found id: ""
	I0229 02:16:57.082890  360776 logs.go:276] 0 containers: []
	W0229 02:16:57.082903  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:16:57.082910  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:16:57.082971  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:16:57.118291  360776 cri.go:89] found id: ""
	I0229 02:16:57.118321  360776 logs.go:276] 0 containers: []
	W0229 02:16:57.118331  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:16:57.118338  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:16:57.118396  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:16:57.155920  360776 cri.go:89] found id: ""
	I0229 02:16:57.155945  360776 logs.go:276] 0 containers: []
	W0229 02:16:57.155954  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:16:57.155960  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:16:57.156007  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:16:57.198460  360776 cri.go:89] found id: ""
	I0229 02:16:57.198494  360776 logs.go:276] 0 containers: []
	W0229 02:16:57.198503  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:16:57.198515  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:16:57.198576  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:16:57.239178  360776 cri.go:89] found id: ""
	I0229 02:16:57.239206  360776 logs.go:276] 0 containers: []
	W0229 02:16:57.239214  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:16:57.239220  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:16:57.239267  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:16:57.280933  360776 cri.go:89] found id: ""
	I0229 02:16:57.280964  360776 logs.go:276] 0 containers: []
	W0229 02:16:57.280977  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:16:57.280988  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:16:57.281004  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:16:57.341023  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:16:57.341056  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:16:54.237542  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:56.736019  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:55.778328  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:58.281018  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:55.309863  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:57.311910  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:59.807723  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:57.356053  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:16:57.356083  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:16:57.435017  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:16:57.435040  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:16:57.435057  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:16:57.472428  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:16:57.472461  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:17:00.020707  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:17:00.035406  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:17:00.035476  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:17:00.072190  360776 cri.go:89] found id: ""
	I0229 02:17:00.072222  360776 logs.go:276] 0 containers: []
	W0229 02:17:00.072231  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:17:00.072237  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:17:00.072289  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:17:00.108829  360776 cri.go:89] found id: ""
	I0229 02:17:00.108857  360776 logs.go:276] 0 containers: []
	W0229 02:17:00.108868  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:17:00.108875  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:17:00.108927  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:17:00.143429  360776 cri.go:89] found id: ""
	I0229 02:17:00.143450  360776 logs.go:276] 0 containers: []
	W0229 02:17:00.143459  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:17:00.143465  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:17:00.143512  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:17:00.180428  360776 cri.go:89] found id: ""
	I0229 02:17:00.180456  360776 logs.go:276] 0 containers: []
	W0229 02:17:00.180467  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:17:00.180496  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:17:00.180564  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:17:00.220115  360776 cri.go:89] found id: ""
	I0229 02:17:00.220143  360776 logs.go:276] 0 containers: []
	W0229 02:17:00.220155  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:17:00.220163  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:17:00.220220  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:17:00.258851  360776 cri.go:89] found id: ""
	I0229 02:17:00.258877  360776 logs.go:276] 0 containers: []
	W0229 02:17:00.258887  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:17:00.258895  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:17:00.258982  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:17:00.304148  360776 cri.go:89] found id: ""
	I0229 02:17:00.304174  360776 logs.go:276] 0 containers: []
	W0229 02:17:00.304185  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:17:00.304193  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:17:00.304277  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:17:00.345893  360776 cri.go:89] found id: ""
	I0229 02:17:00.345923  360776 logs.go:276] 0 containers: []
	W0229 02:17:00.345935  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:17:00.345950  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:17:00.345965  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:17:00.395977  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:17:00.396006  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:17:00.410948  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:17:00.410970  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:17:00.485724  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:17:00.485745  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:17:00.485760  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:17:00.520496  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:17:00.520531  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:16:59.236302  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:17:01.237806  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:17:00.777736  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:17:03.280794  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:17:01.807808  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:17:03.818535  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:17:03.065669  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:17:03.081434  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:17:03.081496  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:17:03.118752  360776 cri.go:89] found id: ""
	I0229 02:17:03.118779  360776 logs.go:276] 0 containers: []
	W0229 02:17:03.118788  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:17:03.118794  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:17:03.118870  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:17:03.156172  360776 cri.go:89] found id: ""
	I0229 02:17:03.156197  360776 logs.go:276] 0 containers: []
	W0229 02:17:03.156209  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:17:03.156216  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:17:03.156285  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:17:03.190792  360776 cri.go:89] found id: ""
	I0229 02:17:03.190815  360776 logs.go:276] 0 containers: []
	W0229 02:17:03.190823  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:17:03.190829  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:17:03.190885  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:17:03.229692  360776 cri.go:89] found id: ""
	I0229 02:17:03.229721  360776 logs.go:276] 0 containers: []
	W0229 02:17:03.229733  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:17:03.229741  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:17:03.229800  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:17:03.271014  360776 cri.go:89] found id: ""
	I0229 02:17:03.271044  360776 logs.go:276] 0 containers: []
	W0229 02:17:03.271053  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:17:03.271058  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:17:03.271118  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:17:03.315291  360776 cri.go:89] found id: ""
	I0229 02:17:03.315316  360776 logs.go:276] 0 containers: []
	W0229 02:17:03.315325  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:17:03.315332  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:17:03.315390  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:17:03.354974  360776 cri.go:89] found id: ""
	I0229 02:17:03.354998  360776 logs.go:276] 0 containers: []
	W0229 02:17:03.355007  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:17:03.355014  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:17:03.355091  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:17:03.394044  360776 cri.go:89] found id: ""
	I0229 02:17:03.394074  360776 logs.go:276] 0 containers: []
	W0229 02:17:03.394101  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:17:03.394120  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:17:03.394138  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:17:03.430131  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:17:03.430164  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:17:03.472760  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:17:03.472793  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:17:03.522797  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:17:03.522837  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:17:03.538642  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:17:03.538672  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:17:03.611189  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
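	Every "describe nodes" attempt in these cycles fails identically: the apiserver endpoint localhost:8443 refuses connections, which is consistent with the empty kube-apiserver container listings above. Two illustrative checks for this symptom (the ss invocation is a suggested diagnostic, not something the test itself runs; the crictl command is the one already used in the log):

	    sudo ss -tlnp | grep 8443                          # is anything listening on the apiserver port?
	    sudo crictl ps -a --quiet --name=kube-apiserver    # empty here: no apiserver container exists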
	I0229 02:17:06.112319  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:17:06.126843  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:17:06.126924  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:17:06.171970  360776 cri.go:89] found id: ""
	I0229 02:17:06.171995  360776 logs.go:276] 0 containers: []
	W0229 02:17:06.172005  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:17:06.172011  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:17:06.172060  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:17:06.208082  360776 cri.go:89] found id: ""
	I0229 02:17:06.208114  360776 logs.go:276] 0 containers: []
	W0229 02:17:06.208126  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:17:06.208133  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:17:06.208211  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:17:06.246429  360776 cri.go:89] found id: ""
	I0229 02:17:06.246454  360776 logs.go:276] 0 containers: []
	W0229 02:17:06.246465  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:17:06.246472  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:17:06.246521  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:17:06.286908  360776 cri.go:89] found id: ""
	I0229 02:17:06.286941  360776 logs.go:276] 0 containers: []
	W0229 02:17:06.286952  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:17:06.286959  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:17:06.287036  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:17:06.330632  360776 cri.go:89] found id: ""
	I0229 02:17:06.330664  360776 logs.go:276] 0 containers: []
	W0229 02:17:06.330707  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:17:06.330720  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:17:06.330793  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:17:06.368385  360776 cri.go:89] found id: ""
	I0229 02:17:06.368412  360776 logs.go:276] 0 containers: []
	W0229 02:17:06.368423  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:17:06.368431  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:17:06.368499  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:17:06.407424  360776 cri.go:89] found id: ""
	I0229 02:17:06.407456  360776 logs.go:276] 0 containers: []
	W0229 02:17:06.407468  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:17:06.407476  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:17:06.407542  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:17:06.447043  360776 cri.go:89] found id: ""
	I0229 02:17:06.447072  360776 logs.go:276] 0 containers: []
	W0229 02:17:06.447084  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:17:06.447098  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:17:06.447119  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:17:06.501604  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:17:06.501639  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:17:06.516247  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:17:06.516274  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:17:06.593087  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:17:06.593112  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:17:06.593126  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:17:06.633057  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:17:06.633097  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:17:03.735552  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:17:05.735757  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:17:07.736746  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:17:05.777670  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:17:07.779116  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:17:06.308986  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:17:08.808349  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:17:09.202624  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:17:09.218424  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:17:09.218496  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:17:09.264508  360776 cri.go:89] found id: ""
	I0229 02:17:09.264538  360776 logs.go:276] 0 containers: []
	W0229 02:17:09.264551  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:17:09.264560  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:17:09.264652  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:17:09.304507  360776 cri.go:89] found id: ""
	I0229 02:17:09.304536  360776 logs.go:276] 0 containers: []
	W0229 02:17:09.304547  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:17:09.304555  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:17:09.304619  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:17:09.354779  360776 cri.go:89] found id: ""
	I0229 02:17:09.354802  360776 logs.go:276] 0 containers: []
	W0229 02:17:09.354811  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:17:09.354817  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:17:09.354866  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:17:09.390031  360776 cri.go:89] found id: ""
	I0229 02:17:09.390065  360776 logs.go:276] 0 containers: []
	W0229 02:17:09.390097  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:17:09.390106  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:17:09.390182  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:17:09.435618  360776 cri.go:89] found id: ""
	I0229 02:17:09.435652  360776 logs.go:276] 0 containers: []
	W0229 02:17:09.435666  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:17:09.435674  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:17:09.435757  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:17:09.479110  360776 cri.go:89] found id: ""
	I0229 02:17:09.479142  360776 logs.go:276] 0 containers: []
	W0229 02:17:09.479154  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:17:09.479163  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:17:09.479236  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:17:09.520748  360776 cri.go:89] found id: ""
	I0229 02:17:09.520781  360776 logs.go:276] 0 containers: []
	W0229 02:17:09.520794  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:17:09.520802  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:17:09.520879  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:17:09.561536  360776 cri.go:89] found id: ""
	I0229 02:17:09.561576  360776 logs.go:276] 0 containers: []
	W0229 02:17:09.561590  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:17:09.561611  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:17:09.561628  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:17:09.621631  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:17:09.621678  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:17:09.640562  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:17:09.640607  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:17:09.727979  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:17:09.728001  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:17:09.728013  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:17:09.766305  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:17:09.766340  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
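
Each retry cycle above is the same health probe: minikube first looks for a running kube-apiserver process (sudo pgrep -xnf kube-apiserver.*minikube.*), then asks crictl for containers matching each control-plane component, and every listing comes back empty. The "connection to the server localhost:8443 was refused" lines are the direct consequence: nothing is serving the apiserver port. Below is a minimal standalone sketch of that probe, not minikube's own code, assuming crictl is installed and that an apiserver would listen on localhost:8443 as in this log.

    package main

    import (
        "fmt"
        "net"
        "os/exec"
        "strings"
        "time"
    )

    // listContainers mirrors the "sudo crictl ps -a --quiet --name=..." calls
    // in the log above; it needs crictl on PATH and sudo privileges.
    func listContainers(name string) ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
            ids, err := listContainers(c)
            if err != nil {
                fmt.Printf("%s: crictl failed: %v\n", c, err)
                continue
            }
            fmt.Printf("%s: %d container(s) found\n", c, len(ids))
        }
        // A quick dial reproduces the "connection refused" seen by kubectl.
        conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
        if err != nil {
            fmt.Println("apiserver port closed:", err)
            return
        }
        conn.Close()
        fmt.Println("apiserver port open")
    }
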
	I0229 02:17:12.312841  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:17:12.329745  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:17:12.329826  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:17:10.236840  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:17:12.736224  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:17:09.779304  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:17:12.277545  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:17:11.308061  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:17:13.808929  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:17:12.376185  360776 cri.go:89] found id: ""
	I0229 02:17:12.376218  360776 logs.go:276] 0 containers: []
	W0229 02:17:12.376230  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:17:12.376240  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:17:12.376317  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:17:12.417025  360776 cri.go:89] found id: ""
	I0229 02:17:12.417059  360776 logs.go:276] 0 containers: []
	W0229 02:17:12.417068  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:17:12.417080  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:17:12.417162  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:17:12.458973  360776 cri.go:89] found id: ""
	I0229 02:17:12.459018  360776 logs.go:276] 0 containers: []
	W0229 02:17:12.459040  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:17:12.459048  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:17:12.459116  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:17:12.500063  360776 cri.go:89] found id: ""
	I0229 02:17:12.500090  360776 logs.go:276] 0 containers: []
	W0229 02:17:12.500102  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:17:12.500110  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:17:12.500177  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:17:12.543182  360776 cri.go:89] found id: ""
	I0229 02:17:12.543213  360776 logs.go:276] 0 containers: []
	W0229 02:17:12.543225  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:17:12.543234  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:17:12.543296  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:17:12.584725  360776 cri.go:89] found id: ""
	I0229 02:17:12.584773  360776 logs.go:276] 0 containers: []
	W0229 02:17:12.584796  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:17:12.584804  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:17:12.584873  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:17:12.634212  360776 cri.go:89] found id: ""
	I0229 02:17:12.634244  360776 logs.go:276] 0 containers: []
	W0229 02:17:12.634256  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:17:12.634263  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:17:12.634330  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:17:12.686103  360776 cri.go:89] found id: ""
	I0229 02:17:12.686134  360776 logs.go:276] 0 containers: []
	W0229 02:17:12.686144  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:17:12.686154  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:17:12.686168  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:17:12.753950  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:17:12.753999  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:17:12.769400  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:17:12.769430  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:17:12.856362  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:17:12.856390  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:17:12.856408  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:17:12.893238  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:17:12.893274  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:17:15.439069  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:17:15.455698  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:17:15.455779  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:17:15.501222  360776 cri.go:89] found id: ""
	I0229 02:17:15.501248  360776 logs.go:276] 0 containers: []
	W0229 02:17:15.501262  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:17:15.501269  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:17:15.501331  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:17:15.544580  360776 cri.go:89] found id: ""
	I0229 02:17:15.544610  360776 logs.go:276] 0 containers: []
	W0229 02:17:15.544623  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:17:15.544632  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:17:15.544697  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:17:15.587250  360776 cri.go:89] found id: ""
	I0229 02:17:15.587301  360776 logs.go:276] 0 containers: []
	W0229 02:17:15.587314  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:17:15.587322  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:17:15.587392  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:17:15.660189  360776 cri.go:89] found id: ""
	I0229 02:17:15.660214  360776 logs.go:276] 0 containers: []
	W0229 02:17:15.660223  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:17:15.660229  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:17:15.660280  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:17:15.715100  360776 cri.go:89] found id: ""
	I0229 02:17:15.715126  360776 logs.go:276] 0 containers: []
	W0229 02:17:15.715136  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:17:15.715142  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:17:15.715203  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:17:15.758998  360776 cri.go:89] found id: ""
	I0229 02:17:15.759028  360776 logs.go:276] 0 containers: []
	W0229 02:17:15.759047  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:17:15.759053  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:17:15.759118  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:17:15.801175  360776 cri.go:89] found id: ""
	I0229 02:17:15.801203  360776 logs.go:276] 0 containers: []
	W0229 02:17:15.801215  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:17:15.801224  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:17:15.801294  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:17:15.849643  360776 cri.go:89] found id: ""
	I0229 02:17:15.849678  360776 logs.go:276] 0 containers: []
	W0229 02:17:15.849690  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:17:15.849704  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:17:15.849724  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:17:15.864824  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:17:15.864856  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:17:15.937271  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:17:15.937299  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:17:15.937313  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:17:15.976404  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:17:15.976448  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:17:16.025658  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:17:16.025697  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
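
The cycle repeats on a roughly three-second cadence: when no control-plane container is found, minikube falls back to gathering kubelet, dmesg, describe-nodes, containerd, and container-status logs, then probes again. A purely illustrative sketch of that retry shape, with probe as a hypothetical stand-in for the checks above:

    package main

    import (
        "fmt"
        "time"
    )

    // probe stands in for the process lookup plus crictl listings in the
    // log; here it always fails, as it does throughout this run.
    func probe() error { return fmt.Errorf("no kube-apiserver container found") }

    func main() {
        deadline := time.Now().Add(30 * time.Second) // the real wait runs for minutes
        for time.Now().Before(deadline) {
            if err := probe(); err != nil {
                fmt.Println("probe failed, gathering logs:", err)
                time.Sleep(3 * time.Second) // matches the ~3s cadence above
                continue
            }
            fmt.Println("apiserver is back")
            return
        }
        fmt.Println("gave up waiting")
    }
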
	I0229 02:17:15.235863  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:17:17.237685  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:17:14.279268  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:17:16.280226  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:17:18.779746  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:17:16.307548  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:17:18.806653  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:17:18.574763  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:17:18.593695  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:17:18.593802  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:17:18.641001  360776 cri.go:89] found id: ""
	I0229 02:17:18.641033  360776 logs.go:276] 0 containers: []
	W0229 02:17:18.641042  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:17:18.641048  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:17:18.641106  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:17:18.701580  360776 cri.go:89] found id: ""
	I0229 02:17:18.701608  360776 logs.go:276] 0 containers: []
	W0229 02:17:18.701617  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:17:18.701623  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:17:18.701674  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:17:18.742596  360776 cri.go:89] found id: ""
	I0229 02:17:18.742632  360776 logs.go:276] 0 containers: []
	W0229 02:17:18.742642  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:17:18.742649  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:17:18.742712  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:17:18.782404  360776 cri.go:89] found id: ""
	I0229 02:17:18.782432  360776 logs.go:276] 0 containers: []
	W0229 02:17:18.782443  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:17:18.782451  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:17:18.782516  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:17:18.826221  360776 cri.go:89] found id: ""
	I0229 02:17:18.826250  360776 logs.go:276] 0 containers: []
	W0229 02:17:18.826262  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:17:18.826270  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:17:18.826354  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:17:18.864698  360776 cri.go:89] found id: ""
	I0229 02:17:18.864737  360776 logs.go:276] 0 containers: []
	W0229 02:17:18.864746  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:17:18.864766  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:17:18.864819  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:17:18.902681  360776 cri.go:89] found id: ""
	I0229 02:17:18.902708  360776 logs.go:276] 0 containers: []
	W0229 02:17:18.902718  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:17:18.902723  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:17:18.902835  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:17:18.942178  360776 cri.go:89] found id: ""
	I0229 02:17:18.942203  360776 logs.go:276] 0 containers: []
	W0229 02:17:18.942213  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:17:18.942223  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:17:18.942236  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:17:18.983914  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:17:18.983947  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:17:19.041670  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:17:19.041710  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:17:19.057445  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:17:19.057475  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:17:19.128946  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:17:19.128974  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:17:19.129007  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:17:21.664806  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:17:21.680938  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:17:21.681037  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:17:21.737776  360776 cri.go:89] found id: ""
	I0229 02:17:21.737808  360776 logs.go:276] 0 containers: []
	W0229 02:17:21.737825  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:17:21.737833  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:17:21.737913  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:17:21.778917  360776 cri.go:89] found id: ""
	I0229 02:17:21.778951  360776 logs.go:276] 0 containers: []
	W0229 02:17:21.778962  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:17:21.778969  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:17:21.779033  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:17:21.819099  360776 cri.go:89] found id: ""
	I0229 02:17:21.819127  360776 logs.go:276] 0 containers: []
	W0229 02:17:21.819139  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:17:21.819147  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:17:21.819230  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:17:21.861290  360776 cri.go:89] found id: ""
	I0229 02:17:21.861323  360776 logs.go:276] 0 containers: []
	W0229 02:17:21.861334  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:17:21.861342  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:17:21.861406  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:17:21.900886  360776 cri.go:89] found id: ""
	I0229 02:17:21.900926  360776 logs.go:276] 0 containers: []
	W0229 02:17:21.900938  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:17:21.900946  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:17:21.901021  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:17:21.943023  360776 cri.go:89] found id: ""
	I0229 02:17:21.943060  360776 logs.go:276] 0 containers: []
	W0229 02:17:21.943072  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:17:21.943080  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:17:21.943145  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:17:21.984305  360776 cri.go:89] found id: ""
	I0229 02:17:21.984341  360776 logs.go:276] 0 containers: []
	W0229 02:17:21.984352  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:17:21.984360  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:17:21.984428  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:17:22.025326  360776 cri.go:89] found id: ""
	I0229 02:17:22.025356  360776 logs.go:276] 0 containers: []
	W0229 02:17:22.025368  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:17:22.025382  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:17:22.025398  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:17:22.074977  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:17:22.075020  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:17:22.092483  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:17:22.092518  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:17:22.171791  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:17:22.171814  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:17:22.171833  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:17:22.211794  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:17:22.211850  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:17:19.736684  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:17:21.737510  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:17:21.278089  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:17:23.278374  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:17:20.808574  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:17:23.307697  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:17:24.758800  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:17:24.773418  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:17:24.773501  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:17:24.819487  360776 cri.go:89] found id: ""
	I0229 02:17:24.819520  360776 logs.go:276] 0 containers: []
	W0229 02:17:24.819531  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:17:24.819540  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:17:24.819605  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:17:24.859906  360776 cri.go:89] found id: ""
	I0229 02:17:24.859938  360776 logs.go:276] 0 containers: []
	W0229 02:17:24.859949  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:17:24.859957  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:17:24.860022  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:17:24.897499  360776 cri.go:89] found id: ""
	I0229 02:17:24.897531  360776 logs.go:276] 0 containers: []
	W0229 02:17:24.897540  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:17:24.897547  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:17:24.897622  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:17:24.935346  360776 cri.go:89] found id: ""
	I0229 02:17:24.935380  360776 logs.go:276] 0 containers: []
	W0229 02:17:24.935393  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:17:24.935401  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:17:24.935468  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:17:24.973567  360776 cri.go:89] found id: ""
	I0229 02:17:24.973591  360776 logs.go:276] 0 containers: []
	W0229 02:17:24.973600  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:17:24.973605  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:17:24.973657  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:17:25.016166  360776 cri.go:89] found id: ""
	I0229 02:17:25.016198  360776 logs.go:276] 0 containers: []
	W0229 02:17:25.016210  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:17:25.016217  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:17:25.016285  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:17:25.059944  360776 cri.go:89] found id: ""
	I0229 02:17:25.059977  360776 logs.go:276] 0 containers: []
	W0229 02:17:25.059991  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:17:25.059999  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:17:25.060057  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:17:25.101594  360776 cri.go:89] found id: ""
	I0229 02:17:25.101627  360776 logs.go:276] 0 containers: []
	W0229 02:17:25.101639  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:17:25.101652  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:17:25.101672  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:17:25.183940  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:17:25.183988  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:17:25.184007  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:17:25.219286  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:17:25.219327  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:17:25.267048  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:17:25.267107  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:17:25.320969  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:17:25.320998  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:17:24.236957  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:17:26.736244  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:17:25.278532  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:17:27.777655  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:17:25.308061  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:17:27.806994  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:17:27.846314  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:17:27.861349  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:17:27.861416  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:17:27.901126  360776 cri.go:89] found id: ""
	I0229 02:17:27.901153  360776 logs.go:276] 0 containers: []
	W0229 02:17:27.901162  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:17:27.901169  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:17:27.901220  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:17:27.942692  360776 cri.go:89] found id: ""
	I0229 02:17:27.942725  360776 logs.go:276] 0 containers: []
	W0229 02:17:27.942738  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:17:27.942745  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:17:27.942803  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:17:27.978891  360776 cri.go:89] found id: ""
	I0229 02:17:27.978919  360776 logs.go:276] 0 containers: []
	W0229 02:17:27.978928  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:17:27.978934  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:17:27.978991  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:17:28.019688  360776 cri.go:89] found id: ""
	I0229 02:17:28.019723  360776 logs.go:276] 0 containers: []
	W0229 02:17:28.019735  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:17:28.019743  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:17:28.019799  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:17:28.056414  360776 cri.go:89] found id: ""
	I0229 02:17:28.056438  360776 logs.go:276] 0 containers: []
	W0229 02:17:28.056451  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:17:28.056457  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:17:28.056504  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:17:28.093691  360776 cri.go:89] found id: ""
	I0229 02:17:28.093727  360776 logs.go:276] 0 containers: []
	W0229 02:17:28.093739  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:17:28.093747  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:17:28.093806  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:17:28.130737  360776 cri.go:89] found id: ""
	I0229 02:17:28.130761  360776 logs.go:276] 0 containers: []
	W0229 02:17:28.130768  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:17:28.130774  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:17:28.130828  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:17:28.167783  360776 cri.go:89] found id: ""
	I0229 02:17:28.167810  360776 logs.go:276] 0 containers: []
	W0229 02:17:28.167820  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:17:28.167832  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:17:28.167850  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:17:28.248054  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:17:28.248080  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:17:28.248096  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:17:28.284935  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:17:28.284963  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:17:28.328563  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:17:28.328605  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:17:28.379372  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:17:28.379412  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:17:30.896570  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:17:30.912070  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:17:30.912140  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:17:30.951633  360776 cri.go:89] found id: ""
	I0229 02:17:30.951662  360776 logs.go:276] 0 containers: []
	W0229 02:17:30.951674  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:17:30.951681  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:17:30.951725  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:17:30.988094  360776 cri.go:89] found id: ""
	I0229 02:17:30.988121  360776 logs.go:276] 0 containers: []
	W0229 02:17:30.988133  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:17:30.988141  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:17:30.988197  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:17:31.025379  360776 cri.go:89] found id: ""
	I0229 02:17:31.025405  360776 logs.go:276] 0 containers: []
	W0229 02:17:31.025416  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:17:31.025423  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:17:31.025476  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:17:31.064070  360776 cri.go:89] found id: ""
	I0229 02:17:31.064100  360776 logs.go:276] 0 containers: []
	W0229 02:17:31.064112  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:17:31.064120  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:17:31.064178  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:17:31.106455  360776 cri.go:89] found id: ""
	I0229 02:17:31.106487  360776 logs.go:276] 0 containers: []
	W0229 02:17:31.106498  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:17:31.106505  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:17:31.106564  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:17:31.141789  360776 cri.go:89] found id: ""
	I0229 02:17:31.141819  360776 logs.go:276] 0 containers: []
	W0229 02:17:31.141830  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:17:31.141838  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:17:31.141985  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:17:31.181781  360776 cri.go:89] found id: ""
	I0229 02:17:31.181807  360776 logs.go:276] 0 containers: []
	W0229 02:17:31.181815  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:17:31.181820  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:17:31.181877  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:17:31.222653  360776 cri.go:89] found id: ""
	I0229 02:17:31.222687  360776 logs.go:276] 0 containers: []
	W0229 02:17:31.222700  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:17:31.222713  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:17:31.222730  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:17:31.272067  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:17:31.272100  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:17:31.287890  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:17:31.287917  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:17:31.370516  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:17:31.370545  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:17:31.370559  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:17:31.416216  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:17:31.416257  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:17:29.235795  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:17:31.237540  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:17:31.729967  360079 pod_ready.go:81] duration metric: took 4m0.001042569s waiting for pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace to be "Ready" ...
	E0229 02:17:31.729999  360079 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace to be "Ready" (will not retry!)
	I0229 02:17:31.730022  360079 pod_ready.go:38] duration metric: took 4m13.043743347s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 02:17:31.730062  360079 kubeadm.go:640] restartCluster took 4m31.356459787s
	W0229 02:17:31.730347  360079 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0229 02:17:31.730404  360079 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
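
The metrics-server wait above gives up after exactly 4m0s: pod_ready polls the pod's Ready condition until the deadline elapses, at which point restartCluster aborts and the cluster is reset with kubeadm. A minimal poll-until-timeout sketch of that pattern follows; it is generic, not minikube's implementation, and podIsReady is a hypothetical stand-in for a Kubernetes API status query.

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // podIsReady is hypothetical; minikube would inspect the pod's status
    // conditions via the Kubernetes API. It never succeeds here, just as
    // metrics-server never reports Ready in the log above.
    func podIsReady() bool { return false }

    // waitPodReady polls every interval until Ready or until the timeout
    // elapses, mirroring the 4m0s wait that fails above.
    func waitPodReady(interval, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if podIsReady() {
                return nil
            }
            time.Sleep(interval)
        }
        return errors.New("timed out waiting for pod to be Ready")
    }

    func main() {
        start := time.Now()
        err := waitPodReady(500*time.Millisecond, 2*time.Second) // short values for demo
        fmt.Println(err, "after", time.Since(start).Round(time.Millisecond))
    }
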
	I0229 02:17:29.777918  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:17:31.778158  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:17:30.307297  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:17:32.307846  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:17:34.309842  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:17:33.976724  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:17:33.991119  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:17:33.991202  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:17:34.038632  360776 cri.go:89] found id: ""
	I0229 02:17:34.038659  360776 logs.go:276] 0 containers: []
	W0229 02:17:34.038668  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:17:34.038674  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:17:34.038744  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:17:34.076069  360776 cri.go:89] found id: ""
	I0229 02:17:34.076109  360776 logs.go:276] 0 containers: []
	W0229 02:17:34.076120  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:17:34.076128  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:17:34.076212  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:17:34.122220  360776 cri.go:89] found id: ""
	I0229 02:17:34.122246  360776 logs.go:276] 0 containers: []
	W0229 02:17:34.122256  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:17:34.122265  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:17:34.122329  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:17:34.163216  360776 cri.go:89] found id: ""
	I0229 02:17:34.163246  360776 logs.go:276] 0 containers: []
	W0229 02:17:34.163259  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:17:34.163268  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:17:34.163337  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:17:34.206631  360776 cri.go:89] found id: ""
	I0229 02:17:34.206679  360776 logs.go:276] 0 containers: []
	W0229 02:17:34.206691  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:17:34.206698  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:17:34.206766  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:17:34.250992  360776 cri.go:89] found id: ""
	I0229 02:17:34.251024  360776 logs.go:276] 0 containers: []
	W0229 02:17:34.251037  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:17:34.251048  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:17:34.251116  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:17:34.289582  360776 cri.go:89] found id: ""
	I0229 02:17:34.289609  360776 logs.go:276] 0 containers: []
	W0229 02:17:34.289620  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:17:34.289626  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:17:34.289690  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:17:34.335130  360776 cri.go:89] found id: ""
	I0229 02:17:34.335158  360776 logs.go:276] 0 containers: []
	W0229 02:17:34.335169  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:17:34.335182  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:17:34.335198  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:17:34.365870  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:17:34.365920  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:17:34.462536  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:17:34.462567  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:17:34.462585  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:17:34.500235  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:17:34.500281  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:17:34.551106  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:17:34.551146  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:17:37.104547  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:17:37.123303  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:17:37.123367  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:17:37.164350  360776 cri.go:89] found id: ""
	I0229 02:17:37.164378  360776 logs.go:276] 0 containers: []
	W0229 02:17:37.164391  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:17:37.164401  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:17:37.164466  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:17:37.209965  360776 cri.go:89] found id: ""
	I0229 02:17:37.210000  360776 logs.go:276] 0 containers: []
	W0229 02:17:37.210014  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:17:37.210023  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:17:37.210125  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:17:37.253162  360776 cri.go:89] found id: ""
	I0229 02:17:37.253192  360776 logs.go:276] 0 containers: []
	W0229 02:17:37.253205  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:17:37.253213  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:17:37.253293  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:17:37.300836  360776 cri.go:89] found id: ""
	I0229 02:17:37.300862  360776 logs.go:276] 0 containers: []
	W0229 02:17:37.300872  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:17:37.300880  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:17:37.300944  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:17:37.343546  360776 cri.go:89] found id: ""
	I0229 02:17:37.343573  360776 logs.go:276] 0 containers: []
	W0229 02:17:37.343585  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:17:37.343598  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:17:37.343669  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:17:37.044032  360079 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (5.313599592s)
	I0229 02:17:37.044103  360079 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 02:17:37.062591  360079 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0229 02:17:37.074885  360079 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 02:17:37.086583  360079 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
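
The "config check" here is just an existence test on the four kubeconfig files kubeadm would have written; the ls exiting with status 2 means none survived the reset, so stale-config cleanup is skipped and a full kubeadm init runs next. A sketch of the same check, using the exact paths from the log:

    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        // The same four files the "ls -la" in the log checks for.
        files := []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        }
        missing := 0
        for _, f := range files {
            if _, err := os.Stat(f); err != nil {
                fmt.Println("missing:", f)
                missing++
            }
        }
        if missing > 0 {
            fmt.Println("config check failed, skipping stale config cleanup")
        }
    }
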
	I0229 02:17:37.086639  360079 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0229 02:17:37.155776  360079 kubeadm.go:322] [init] Using Kubernetes version: v1.29.0-rc.2
	I0229 02:17:37.155861  360079 kubeadm.go:322] [preflight] Running pre-flight checks
	I0229 02:17:37.340395  360079 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0229 02:17:37.340526  360079 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0229 02:17:37.340643  360079 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0229 02:17:37.578733  360079 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0229 02:17:37.580576  360079 out.go:204]   - Generating certificates and keys ...
	I0229 02:17:37.580753  360079 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0229 02:17:37.580872  360079 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0229 02:17:37.580986  360079 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0229 02:17:37.581082  360079 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0229 02:17:37.581187  360079 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0229 02:17:37.581416  360079 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0229 02:17:37.581969  360079 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0229 02:17:37.582241  360079 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0229 02:17:37.582871  360079 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0229 02:17:37.583233  360079 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0229 02:17:37.583541  360079 kubeadm.go:322] [certs] Using the existing "sa" key
	I0229 02:17:37.583596  360079 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0229 02:17:37.843311  360079 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0229 02:17:37.914504  360079 kubeadm.go:322] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0229 02:17:38.039892  360079 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0229 02:17:38.271953  360079 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0229 02:17:38.514979  360079 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0229 02:17:38.515587  360079 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0229 02:17:38.518101  360079 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0229 02:17:34.279682  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:17:36.283111  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:17:38.780078  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:17:36.807145  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:17:39.305997  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:17:37.407526  360776 cri.go:89] found id: ""
	I0229 02:17:37.407554  360776 logs.go:276] 0 containers: []
	W0229 02:17:37.407567  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:17:37.407574  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:17:37.407642  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:17:37.486848  360776 cri.go:89] found id: ""
	I0229 02:17:37.486890  360776 logs.go:276] 0 containers: []
	W0229 02:17:37.486902  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:17:37.486910  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:17:37.486978  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:17:37.529152  360776 cri.go:89] found id: ""
	I0229 02:17:37.529187  360776 logs.go:276] 0 containers: []
	W0229 02:17:37.529199  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:17:37.529221  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:17:37.529238  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:17:37.594611  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:17:37.594642  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:17:37.612946  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:17:37.612980  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:17:37.697527  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
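	"The connection to the server localhost:8443 was refused" means no kube-apiserver is listening yet, which is consistent with the empty `crictl ps` results above; every `kubectl describe nodes` attempt will fail until the control plane comes up. A quick manual check of the port state, assuming shell access to the node (illustrative commands, not part of the test flow):

	    # Is anything listening on the apiserver port?
	    sudo ss -tlnp | grep 8443 || echo "apiserver not listening"
	    # Once it is up, probe the health endpoint.
	    curl -ks https://localhost:8443/healthz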
	I0229 02:17:37.697552  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:17:37.697568  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:17:37.737130  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:17:37.737165  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:17:40.285260  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:17:40.302884  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:17:40.302962  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:17:40.346431  360776 cri.go:89] found id: ""
	I0229 02:17:40.346463  360776 logs.go:276] 0 containers: []
	W0229 02:17:40.346474  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:17:40.346481  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:17:40.346547  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:17:40.403100  360776 cri.go:89] found id: ""
	I0229 02:17:40.403132  360776 logs.go:276] 0 containers: []
	W0229 02:17:40.403147  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:17:40.403154  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:17:40.403223  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:17:40.466390  360776 cri.go:89] found id: ""
	I0229 02:17:40.466424  360776 logs.go:276] 0 containers: []
	W0229 02:17:40.466435  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:17:40.466444  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:17:40.466516  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:17:40.509811  360776 cri.go:89] found id: ""
	I0229 02:17:40.509840  360776 logs.go:276] 0 containers: []
	W0229 02:17:40.509851  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:17:40.509859  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:17:40.509918  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:17:40.546249  360776 cri.go:89] found id: ""
	I0229 02:17:40.546281  360776 logs.go:276] 0 containers: []
	W0229 02:17:40.546294  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:17:40.546302  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:17:40.546366  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:17:40.584490  360776 cri.go:89] found id: ""
	I0229 02:17:40.584520  360776 logs.go:276] 0 containers: []
	W0229 02:17:40.584532  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:17:40.584540  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:17:40.584602  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:17:40.628397  360776 cri.go:89] found id: ""
	I0229 02:17:40.628427  360776 logs.go:276] 0 containers: []
	W0229 02:17:40.628439  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:17:40.628447  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:17:40.628508  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:17:40.675557  360776 cri.go:89] found id: ""
	I0229 02:17:40.675584  360776 logs.go:276] 0 containers: []
	W0229 02:17:40.675593  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:17:40.675603  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:17:40.675616  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:17:40.762140  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:17:40.762167  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:17:40.762192  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:17:40.808405  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:17:40.808444  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:17:40.860511  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:17:40.860553  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:17:40.929977  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:17:40.930013  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
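	Each retry cycle above gathers the same five evidence sources: the kubelet and containerd journals, recent kernel warnings, `kubectl describe nodes`, and a container listing. Run by hand on the node, the commands are exactly those in the log (collected here only for readability):

	    sudo journalctl -u kubelet -n 400
	    sudo journalctl -u containerd -n 400
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	    sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
	    sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a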
	I0229 02:17:38.519654  360079 out.go:204]   - Booting up control plane ...
	I0229 02:17:38.519770  360079 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0229 02:17:38.520351  360079 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0229 02:17:38.523272  360079 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0229 02:17:38.545603  360079 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0229 02:17:38.547015  360079 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0229 02:17:38.547133  360079 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0229 02:17:38.713788  360079 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0229 02:17:40.780376  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:17:43.278958  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:17:41.308561  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:17:43.308710  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:17:44.718240  360079 kubeadm.go:322] [apiclient] All control plane components are healthy after 6.003956 seconds
	I0229 02:17:44.736859  360079 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0229 02:17:44.755878  360079 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0229 02:17:45.285373  360079 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0229 02:17:45.285648  360079 kubeadm.go:322] [mark-control-plane] Marking the node no-preload-907398 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0229 02:17:45.797261  360079 kubeadm.go:322] [bootstrap-token] Using token: 32tkap.hl2tmrs81t324g78
	I0229 02:17:45.798858  360079 out.go:204]   - Configuring RBAC rules ...
	I0229 02:17:45.798996  360079 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0229 02:17:45.805734  360079 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0229 02:17:45.814737  360079 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0229 02:17:45.818516  360079 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0229 02:17:45.823668  360079 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0229 02:17:45.827430  360079 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0229 02:17:45.842656  360079 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0229 02:17:46.096543  360079 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0229 02:17:46.292966  360079 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0229 02:17:46.293952  360079 kubeadm.go:322] 
	I0229 02:17:46.294055  360079 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0229 02:17:46.294075  360079 kubeadm.go:322] 
	I0229 02:17:46.294188  360079 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0229 02:17:46.294199  360079 kubeadm.go:322] 
	I0229 02:17:46.294231  360079 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0229 02:17:46.294314  360079 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0229 02:17:46.294432  360079 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0229 02:17:46.294454  360079 kubeadm.go:322] 
	I0229 02:17:46.294528  360079 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0229 02:17:46.294547  360079 kubeadm.go:322] 
	I0229 02:17:46.294635  360079 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0229 02:17:46.294657  360079 kubeadm.go:322] 
	I0229 02:17:46.294720  360079 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0229 02:17:46.294864  360079 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0229 02:17:46.294948  360079 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0229 02:17:46.294959  360079 kubeadm.go:322] 
	I0229 02:17:46.295078  360079 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0229 02:17:46.295174  360079 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0229 02:17:46.295185  360079 kubeadm.go:322] 
	I0229 02:17:46.295297  360079 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 32tkap.hl2tmrs81t324g78 \
	I0229 02:17:46.295404  360079 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:99ebea2fb56f25d35c82503561681d8a6f4727046bb097effad0d0203de43e75 \
	I0229 02:17:46.295441  360079 kubeadm.go:322] 	--control-plane 
	I0229 02:17:46.295448  360079 kubeadm.go:322] 
	I0229 02:17:46.295583  360079 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0229 02:17:46.295605  360079 kubeadm.go:322] 
	I0229 02:17:46.295770  360079 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 32tkap.hl2tmrs81t324g78 \
	I0229 02:17:46.295933  360079 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:99ebea2fb56f25d35c82503561681d8a6f4727046bb097effad0d0203de43e75 
	I0229 02:17:46.298233  360079 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
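	The join commands above carry a bootstrap token (valid for 24h by default) and a CA public-key hash that joining nodes use to authenticate the control plane. The hash can be recomputed on the control-plane node with the standard kubeadm recipe, here pointed at minikube's certificate dir from the [certs] phase above (recipe from the kubeadm docs; the output should match the sha256 value in the log):

	    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	      | openssl rsa -pubin -outform der 2>/dev/null \
	      | openssl dgst -sha256 -hex | sed 's/^.* //'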
	I0229 02:17:46.298273  360079 cni.go:84] Creating CNI manager for ""
	I0229 02:17:46.298290  360079 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0229 02:17:46.300109  360079 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0229 02:17:43.449607  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:17:43.466367  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:17:43.466441  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:17:43.504826  360776 cri.go:89] found id: ""
	I0229 02:17:43.504861  360776 logs.go:276] 0 containers: []
	W0229 02:17:43.504873  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:17:43.504880  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:17:43.504946  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:17:43.548641  360776 cri.go:89] found id: ""
	I0229 02:17:43.548682  360776 logs.go:276] 0 containers: []
	W0229 02:17:43.548693  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:17:43.548701  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:17:43.548760  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:17:43.591044  360776 cri.go:89] found id: ""
	I0229 02:17:43.591075  360776 logs.go:276] 0 containers: []
	W0229 02:17:43.591085  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:17:43.591092  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:17:43.591152  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:17:43.639237  360776 cri.go:89] found id: ""
	I0229 02:17:43.639261  360776 logs.go:276] 0 containers: []
	W0229 02:17:43.639269  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:17:43.639275  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:17:43.639329  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:17:43.677231  360776 cri.go:89] found id: ""
	I0229 02:17:43.677264  360776 logs.go:276] 0 containers: []
	W0229 02:17:43.677277  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:17:43.677285  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:17:43.677359  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:17:43.721264  360776 cri.go:89] found id: ""
	I0229 02:17:43.721295  360776 logs.go:276] 0 containers: []
	W0229 02:17:43.721306  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:17:43.721314  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:17:43.721379  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:17:43.757248  360776 cri.go:89] found id: ""
	I0229 02:17:43.757281  360776 logs.go:276] 0 containers: []
	W0229 02:17:43.757293  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:17:43.757300  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:17:43.757365  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:17:43.802304  360776 cri.go:89] found id: ""
	I0229 02:17:43.802332  360776 logs.go:276] 0 containers: []
	W0229 02:17:43.802343  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:17:43.802359  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:17:43.802375  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:17:43.855921  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:17:43.855949  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:17:43.869586  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:17:43.869623  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:17:43.945526  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:17:43.945562  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:17:43.945579  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:17:43.987179  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:17:43.987215  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:17:46.537504  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:17:46.556578  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:17:46.556653  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:17:46.603983  360776 cri.go:89] found id: ""
	I0229 02:17:46.604012  360776 logs.go:276] 0 containers: []
	W0229 02:17:46.604025  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:17:46.604037  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:17:46.604107  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:17:46.657708  360776 cri.go:89] found id: ""
	I0229 02:17:46.657736  360776 logs.go:276] 0 containers: []
	W0229 02:17:46.657747  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:17:46.657754  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:17:46.657820  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:17:46.708795  360776 cri.go:89] found id: ""
	I0229 02:17:46.708830  360776 logs.go:276] 0 containers: []
	W0229 02:17:46.708843  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:17:46.708852  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:17:46.708920  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:17:46.758013  360776 cri.go:89] found id: ""
	I0229 02:17:46.758043  360776 logs.go:276] 0 containers: []
	W0229 02:17:46.758056  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:17:46.758064  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:17:46.758157  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:17:46.813107  360776 cri.go:89] found id: ""
	I0229 02:17:46.813138  360776 logs.go:276] 0 containers: []
	W0229 02:17:46.813149  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:17:46.813156  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:17:46.813219  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:17:46.859040  360776 cri.go:89] found id: ""
	I0229 02:17:46.859070  360776 logs.go:276] 0 containers: []
	W0229 02:17:46.859081  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:17:46.859089  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:17:46.859154  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:17:46.905302  360776 cri.go:89] found id: ""
	I0229 02:17:46.905334  360776 logs.go:276] 0 containers: []
	W0229 02:17:46.905346  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:17:46.905354  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:17:46.905416  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:17:46.950465  360776 cri.go:89] found id: ""
	I0229 02:17:46.950491  360776 logs.go:276] 0 containers: []
	W0229 02:17:46.950502  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:17:46.950515  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:17:46.950530  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:17:47.035016  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:17:47.035044  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:17:47.035062  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:17:47.074108  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:17:47.074140  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:17:47.122149  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:17:47.122183  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:17:47.187233  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:17:47.187283  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:17:46.301876  360079 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0229 02:17:46.328857  360079 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
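	The 457-byte file scp'd here is minikube's bridge CNI configuration. An illustrative conflist of the same shape, written the same way (field values are representative of a default bridge setup, not a byte-for-byte copy of the file minikube wrote):

	    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
	    {
	      "cniVersion": "0.3.1",
	      "name": "bridge",
	      "plugins": [
	        {
	          "type": "bridge",
	          "bridge": "bridge",
	          "addIf": "true",
	          "isDefaultGateway": true,
	          "ipMasq": true,
	          "hairpinMode": true,
	          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	        },
	        { "type": "portmap", "capabilities": { "portMappings": true } }
	      ]
	    }
	    EOF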
	I0229 02:17:46.365095  360079 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0229 02:17:46.365210  360079 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:17:46.365239  360079 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=f83faa3abac33ca85ff15afa19006ad0a2554d61 minikube.k8s.io/name=no-preload-907398 minikube.k8s.io/updated_at=2024_02_29T02_17_46_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:17:46.445475  360079 ops.go:34] apiserver oom_adj: -16
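	The -16 read back here is on the legacy oom_adj scale, corresponding to an oom_score_adj near -1000: the kubelet strongly deprioritizes the apiserver for the OOM killer. Both views can be inspected on a live node (the first command is the one minikube ran above; the second uses the modern interface):

	    cat /proc/$(pgrep kube-apiserver)/oom_adj
	    cat /proc/$(pgrep kube-apiserver)/oom_score_adj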
	I0229 02:17:46.712653  360079 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:17:47.213595  360079 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:17:47.713471  360079 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:17:45.279713  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:17:47.778580  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:17:45.309019  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:17:47.808652  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:17:49.708451  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:17:49.727327  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:17:49.727383  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:17:49.775679  360776 cri.go:89] found id: ""
	I0229 02:17:49.775712  360776 logs.go:276] 0 containers: []
	W0229 02:17:49.775723  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:17:49.775732  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:17:49.775795  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:17:49.821348  360776 cri.go:89] found id: ""
	I0229 02:17:49.821378  360776 logs.go:276] 0 containers: []
	W0229 02:17:49.821387  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:17:49.821393  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:17:49.821459  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:17:49.864148  360776 cri.go:89] found id: ""
	I0229 02:17:49.864173  360776 logs.go:276] 0 containers: []
	W0229 02:17:49.864182  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:17:49.864188  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:17:49.864281  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:17:49.904720  360776 cri.go:89] found id: ""
	I0229 02:17:49.904747  360776 logs.go:276] 0 containers: []
	W0229 02:17:49.904756  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:17:49.904768  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:17:49.904835  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:17:49.941952  360776 cri.go:89] found id: ""
	I0229 02:17:49.941976  360776 logs.go:276] 0 containers: []
	W0229 02:17:49.941985  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:17:49.941992  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:17:49.942050  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:17:49.987518  360776 cri.go:89] found id: ""
	I0229 02:17:49.987549  360776 logs.go:276] 0 containers: []
	W0229 02:17:49.987559  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:17:49.987566  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:17:49.987642  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:17:50.030662  360776 cri.go:89] found id: ""
	I0229 02:17:50.030691  360776 logs.go:276] 0 containers: []
	W0229 02:17:50.030700  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:17:50.030708  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:17:50.030768  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:17:50.075564  360776 cri.go:89] found id: ""
	I0229 02:17:50.075594  360776 logs.go:276] 0 containers: []
	W0229 02:17:50.075605  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:17:50.075617  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:17:50.075634  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:17:50.144223  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:17:50.144261  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:17:50.190615  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:17:50.190649  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:17:50.209014  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:17:50.209041  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:17:50.291096  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:17:50.291121  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:17:50.291135  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:17:48.213151  360079 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:17:48.713484  360079 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:17:49.212735  360079 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:17:49.713172  360079 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:17:50.213286  360079 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:17:50.712875  360079 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:17:51.213491  360079 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:17:51.713354  360079 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:17:52.212811  360079 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:17:52.712670  360079 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:17:49.779580  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:17:51.771065  360217 pod_ready.go:81] duration metric: took 4m0.00037351s waiting for pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace to be "Ready" ...
	E0229 02:17:51.771121  360217 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace to be "Ready" (will not retry!)
	I0229 02:17:51.771147  360217 pod_ready.go:38] duration metric: took 4m14.54716064s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 02:17:51.771185  360217 kubeadm.go:640] restartCluster took 4m31.62028036s
	W0229 02:17:51.771272  360217 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0229 02:17:51.771309  360217 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
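	At this point process 360217 has waited the full 4m0s for metrics-server (and the other labelled system pods) to become Ready, gives up on restartCluster, and falls back to `kubeadm reset` plus re-init, the same path process 360079 took earlier. The equivalent manual wait, assuming kubectl access to the cluster (illustrative, using the pod label from the log):

	    kubectl -n kube-system wait pod \
	      -l k8s-app=metrics-server \
	      --for=condition=Ready --timeout=4m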
	I0229 02:17:50.307305  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:17:52.309458  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:17:54.310095  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:17:52.827936  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:17:52.844926  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:17:52.845027  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:17:52.892302  360776 cri.go:89] found id: ""
	I0229 02:17:52.892336  360776 logs.go:276] 0 containers: []
	W0229 02:17:52.892349  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:17:52.892357  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:17:52.892417  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:17:52.943564  360776 cri.go:89] found id: ""
	I0229 02:17:52.943597  360776 logs.go:276] 0 containers: []
	W0229 02:17:52.943607  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:17:52.943615  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:17:52.943683  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:17:52.990217  360776 cri.go:89] found id: ""
	I0229 02:17:52.990251  360776 logs.go:276] 0 containers: []
	W0229 02:17:52.990269  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:17:52.990278  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:17:52.990347  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:17:53.038508  360776 cri.go:89] found id: ""
	I0229 02:17:53.038542  360776 logs.go:276] 0 containers: []
	W0229 02:17:53.038554  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:17:53.038562  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:17:53.038622  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:17:53.082156  360776 cri.go:89] found id: ""
	I0229 02:17:53.082184  360776 logs.go:276] 0 containers: []
	W0229 02:17:53.082197  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:17:53.082205  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:17:53.082287  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:17:53.149247  360776 cri.go:89] found id: ""
	I0229 02:17:53.149284  360776 logs.go:276] 0 containers: []
	W0229 02:17:53.149295  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:17:53.149304  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:17:53.149371  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:17:53.201169  360776 cri.go:89] found id: ""
	I0229 02:17:53.201199  360776 logs.go:276] 0 containers: []
	W0229 02:17:53.201211  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:17:53.201219  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:17:53.201286  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:17:53.268458  360776 cri.go:89] found id: ""
	I0229 02:17:53.268493  360776 logs.go:276] 0 containers: []
	W0229 02:17:53.268507  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:17:53.268521  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:17:53.268546  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:17:53.288661  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:17:53.288708  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:17:53.371251  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:17:53.371277  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:17:53.371295  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:17:53.415981  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:17:53.416033  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:17:53.464558  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:17:53.464600  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:17:56.030905  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:17:56.046625  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:17:56.046709  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:17:56.090035  360776 cri.go:89] found id: ""
	I0229 02:17:56.090066  360776 logs.go:276] 0 containers: []
	W0229 02:17:56.090094  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:17:56.090103  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:17:56.090176  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:17:56.158245  360776 cri.go:89] found id: ""
	I0229 02:17:56.158276  360776 logs.go:276] 0 containers: []
	W0229 02:17:56.158289  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:17:56.158297  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:17:56.158378  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:17:56.203917  360776 cri.go:89] found id: ""
	I0229 02:17:56.203947  360776 logs.go:276] 0 containers: []
	W0229 02:17:56.203959  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:17:56.203967  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:17:56.204037  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:17:56.267950  360776 cri.go:89] found id: ""
	I0229 02:17:56.267978  360776 logs.go:276] 0 containers: []
	W0229 02:17:56.267995  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:17:56.268003  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:17:56.268065  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:17:56.312936  360776 cri.go:89] found id: ""
	I0229 02:17:56.312967  360776 logs.go:276] 0 containers: []
	W0229 02:17:56.312979  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:17:56.312987  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:17:56.313050  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:17:56.357548  360776 cri.go:89] found id: ""
	I0229 02:17:56.357584  360776 logs.go:276] 0 containers: []
	W0229 02:17:56.357596  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:17:56.357605  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:17:56.357674  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:17:56.401842  360776 cri.go:89] found id: ""
	I0229 02:17:56.401876  360776 logs.go:276] 0 containers: []
	W0229 02:17:56.401890  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:17:56.401898  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:17:56.401965  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:17:56.448506  360776 cri.go:89] found id: ""
	I0229 02:17:56.448538  360776 logs.go:276] 0 containers: []
	W0229 02:17:56.448549  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:17:56.448562  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:17:56.448578  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:17:56.498783  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:17:56.498821  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:17:56.516722  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:17:56.516768  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:17:56.601770  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:17:56.601797  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:17:56.601815  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:17:56.642969  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:17:56.643010  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:17:53.212697  360079 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:17:53.712843  360079 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:17:54.212762  360079 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:17:54.713449  360079 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:17:55.213612  360079 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:17:55.712707  360079 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:17:56.213635  360079 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:17:56.713158  360079 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:17:57.213615  360079 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:17:57.713426  360079 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:17:57.378120  360217 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (5.606758107s)
	I0229 02:17:57.378252  360217 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 02:17:57.396898  360217 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0229 02:17:57.409107  360217 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 02:17:57.420877  360217 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 02:17:57.420927  360217 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0229 02:17:57.486066  360217 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0229 02:17:57.486157  360217 kubeadm.go:322] [preflight] Running pre-flight checks
	I0229 02:17:57.660083  360217 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0229 02:17:57.660277  360217 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0229 02:17:57.660395  360217 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0229 02:17:57.916360  360217 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0229 02:17:58.213116  360079 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:17:58.349580  360079 kubeadm.go:1088] duration metric: took 11.984450803s to wait for elevateKubeSystemPrivileges.
	I0229 02:17:58.349651  360079 kubeadm.go:406] StartCluster complete in 4m58.053023709s
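	The 11.98s elevateKubeSystemPrivileges wait is the half-second `kubectl get sa default` polling visible since 02:17:46: the default ServiceAccount only appears once the controller-manager's token controller has run, and minikube needs it before the `create clusterrolebinding minikube-rbac` call binds cluster-admin to kube-system. A condensed sketch of that poll, using the same binary and kubeconfig paths as the log (the loop form is illustrative):

	    # Poll until the default ServiceAccount exists, as the log does every ~500ms.
	    until sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default \
	        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	      sleep 0.5
	    done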
	I0229 02:17:58.349775  360079 settings.go:142] acquiring lock: {Name:mkf6d985c87ae1ba2300543c86d438bf48134dc6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:17:58.349948  360079 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18063-309085/kubeconfig
	I0229 02:17:58.351856  360079 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18063-309085/kubeconfig: {Name:mkd85f0f36cbc770f723a754929a01738907b7bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:17:58.352191  360079 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0229 02:17:58.352353  360079 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
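	The toEnable map shows four addons active for this profile: storage-provisioner, default-storageclass, metrics-server, and dashboard; everything else is explicitly false. The Setting/Checking lines that follow are the fan-out over that map. The same view is available from the CLI (standard minikube command; profile name taken from the log):

	    minikube addons list -p no-preload-907398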
	I0229 02:17:58.352434  360079 addons.go:69] Setting storage-provisioner=true in profile "no-preload-907398"
	I0229 02:17:58.352462  360079 addons.go:234] Setting addon storage-provisioner=true in "no-preload-907398"
	W0229 02:17:58.352474  360079 addons.go:243] addon storage-provisioner should already be in state true
	I0229 02:17:58.352492  360079 config.go:182] Loaded profile config "no-preload-907398": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.29.0-rc.2
	I0229 02:17:58.352546  360079 addons.go:69] Setting default-storageclass=true in profile "no-preload-907398"
	I0229 02:17:58.352600  360079 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-907398"
	I0229 02:17:58.352615  360079 host.go:66] Checking if "no-preload-907398" exists ...
	I0229 02:17:58.353032  360079 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0229 02:17:58.353043  360079 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0229 02:17:58.353052  360079 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:17:58.353068  360079 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:17:58.353120  360079 addons.go:69] Setting metrics-server=true in profile "no-preload-907398"
	I0229 02:17:58.353134  360079 addons.go:234] Setting addon metrics-server=true in "no-preload-907398"
	W0229 02:17:58.353141  360079 addons.go:243] addon metrics-server should already be in state true
	I0229 02:17:58.353182  360079 host.go:66] Checking if "no-preload-907398" exists ...
	I0229 02:17:58.353351  360079 addons.go:69] Setting dashboard=true in profile "no-preload-907398"
	I0229 02:17:58.353372  360079 addons.go:234] Setting addon dashboard=true in "no-preload-907398"
	W0229 02:17:58.353379  360079 addons.go:243] addon dashboard should already be in state true
	I0229 02:17:58.353416  360079 host.go:66] Checking if "no-preload-907398" exists ...
	I0229 02:17:58.353501  360079 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0229 02:17:58.353521  360079 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:17:58.353780  360079 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0229 02:17:58.353802  360079 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:17:58.374370  360079 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32883
	I0229 02:17:58.374457  360079 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35955
	I0229 02:17:58.374503  360079 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41265
	I0229 02:17:58.374564  360079 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34767
	I0229 02:17:58.375443  360079 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:17:58.375468  360079 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:17:58.375533  360079 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:17:58.375559  360079 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:17:58.375998  360079 main.go:141] libmachine: Using API Version  1
	I0229 02:17:58.376013  360079 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:17:58.376104  360079 main.go:141] libmachine: Using API Version  1
	I0229 02:17:58.376118  360079 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:17:58.376153  360079 main.go:141] libmachine: Using API Version  1
	I0229 02:17:58.376166  360079 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:17:58.376242  360079 main.go:141] libmachine: Using API Version  1
	I0229 02:17:58.376255  360079 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:17:58.376604  360079 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:17:58.376608  360079 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:17:58.376642  360079 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:17:58.377147  360079 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0229 02:17:58.377181  360079 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:17:58.377256  360079 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0229 02:17:58.377274  360079 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:17:58.377339  360079 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:17:58.377532  360079 main.go:141] libmachine: (no-preload-907398) Calling .GetState
	I0229 02:17:58.377723  360079 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0229 02:17:58.377754  360079 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:17:58.380332  360079 addons.go:234] Setting addon default-storageclass=true in "no-preload-907398"
	W0229 02:17:58.380348  360079 addons.go:243] addon default-storageclass should already be in state true
	I0229 02:17:58.380373  360079 host.go:66] Checking if "no-preload-907398" exists ...
	I0229 02:17:58.380607  360079 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0229 02:17:58.380620  360079 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:17:58.399601  360079 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37663
	I0229 02:17:58.400286  360079 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:17:58.400514  360079 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36145
	I0229 02:17:58.401167  360079 main.go:141] libmachine: Using API Version  1
	I0229 02:17:58.401184  360079 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:17:58.401173  360079 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:17:58.401760  360079 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:17:58.402030  360079 main.go:141] libmachine: (no-preload-907398) Calling .GetState
	I0229 02:17:58.402970  360079 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36619
	W0229 02:17:58.403287  360079 kapi.go:245] failed rescaling "coredns" deployment in "kube-system" namespace and "no-preload-907398" context to 1 replicas: non-retryable failure while rescaling coredns deployment: Operation cannot be fulfilled on deployments.apps "coredns": the object has been modified; please apply your changes to the latest version and try again
	E0229 02:17:58.403312  360079 start.go:219] Unable to scale down deployment "coredns" in namespace "kube-system" to 1 replica: non-retryable failure while rescaling coredns deployment: Operation cannot be fulfilled on deployments.apps "coredns": the object has been modified; please apply your changes to the latest version and try again
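The rescale failure above is ordinary optimistic-concurrency behavior: the full-object update raced with a controller that had already modified the coredns deployment. A minimal sketch of retrying by hand, reusing the in-VM binary and kubeconfig paths from this log; `kubectl scale` goes through the scale subresource, so it usually sidesteps the "object has been modified" conflict:

    sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl \
      --kubeconfig=/var/lib/minikube/kubeconfig \
      -n kube-system scale deployment coredns --replicas=1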
	I0229 02:17:58.403338  360079 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.150 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0229 02:17:58.405226  360079 out.go:177] * Verifying Kubernetes components...
	I0229 02:17:58.403538  360079 main.go:141] libmachine: Using API Version  1
	I0229 02:17:58.403723  360079 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:17:58.404198  360079 main.go:141] libmachine: (no-preload-907398) Calling .DriverName
	I0229 02:17:58.406627  360079 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:17:58.406718  360079 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 02:17:58.412539  360079 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 02:17:58.407373  360079 main.go:141] libmachine: Using API Version  1
	I0229 02:17:58.407398  360079 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:17:58.414311  360079 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0229 02:17:58.414334  360079 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0229 02:17:58.414352  360079 main.go:141] libmachine: (no-preload-907398) Calling .GetSSHHostname
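The "scp memory --> ..." lines mean the manifest bytes are streamed over the already-open SSH session rather than copied from a file on disk. A rough stock-OpenSSH equivalent, using the key path, user and IP from the sshutil lines below (storage-provisioner.yaml here stands in for the in-memory buffer):

    cat storage-provisioner.yaml | ssh -i /home/jenkins/minikube-integration/18063-309085/.minikube/machines/no-preload-907398/id_rsa \
      docker@192.168.61.150 "sudo tee /etc/kubernetes/addons/storage-provisioner.yaml >/dev/null"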
	I0229 02:17:58.412590  360079 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:17:58.412844  360079 main.go:141] libmachine: (no-preload-907398) Calling .GetState
	I0229 02:17:58.413706  360079 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44029
	I0229 02:17:58.415059  360079 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:17:58.415498  360079 main.go:141] libmachine: (no-preload-907398) Calling .GetState
	I0229 02:17:58.417082  360079 main.go:141] libmachine: (no-preload-907398) Calling .DriverName
	I0229 02:17:58.417438  360079 main.go:141] libmachine: (no-preload-907398) Calling .DriverName
	I0229 02:17:58.418583  360079 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0229 02:17:58.419735  360079 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0229 02:17:58.420843  360079 addons.go:426] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0229 02:17:58.420858  360079 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0229 02:17:58.420876  360079 main.go:141] libmachine: (no-preload-907398) Calling .GetSSHHostname
	I0229 02:17:58.418780  360079 main.go:141] libmachine: (no-preload-907398) DBG | domain no-preload-907398 has defined MAC address 52:54:00:c8:70:9e in network mk-no-preload-907398
	I0229 02:17:58.420946  360079 main.go:141] libmachine: (no-preload-907398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:70:9e", ip: ""} in network mk-no-preload-907398: {Iface:virbr1 ExpiryTime:2024-02-29 03:12:50 +0000 UTC Type:0 Mac:52:54:00:c8:70:9e Iaid: IPaddr:192.168.61.150 Prefix:24 Hostname:no-preload-907398 Clientid:01:52:54:00:c8:70:9e}
	I0229 02:17:58.420968  360079 main.go:141] libmachine: (no-preload-907398) DBG | domain no-preload-907398 has defined IP address 192.168.61.150 and MAC address 52:54:00:c8:70:9e in network mk-no-preload-907398
	I0229 02:17:58.418281  360079 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:17:58.422030  360079 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0229 02:17:57.917746  360217 out.go:204]   - Generating certificates and keys ...
	I0229 02:17:57.917859  360217 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0229 02:17:57.917965  360217 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0229 02:17:57.918411  360217 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0229 02:17:57.918918  360217 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0229 02:17:57.919445  360217 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0229 02:17:57.919873  360217 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0229 02:17:57.920396  360217 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0229 02:17:57.920807  360217 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0229 02:17:57.921322  360217 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0229 02:17:57.921710  360217 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0229 02:17:57.922094  360217 kubeadm.go:322] [certs] Using the existing "sa" key
	I0229 02:17:57.922176  360217 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0229 02:17:58.103086  360217 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0229 02:17:58.146435  360217 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0229 02:17:58.422571  360217 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0229 02:17:58.544422  360217 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0229 02:17:58.545127  360217 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0229 02:17:58.547666  360217 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0229 02:17:58.549247  360217 out.go:204]   - Booting up control plane ...
	I0229 02:17:58.549352  360217 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0229 02:17:58.549459  360217 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0229 02:17:58.550242  360217 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0229 02:17:58.577890  360217 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0229 02:17:58.579022  360217 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0229 02:17:58.579096  360217 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0229 02:17:58.733877  360217 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
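The bracketed [certs]/[kubeconfig]/[etcd]/[control-plane]/[kubelet-start] markers above map onto kubeadm init phases, which can also be run one at a time. A sketch of the same sequence by hand, in the order this run logged it (the real invocation passes a generated --config file, omitted here):

    sudo kubeadm init phase certs all
    sudo kubeadm init phase kubeconfig all
    sudo kubeadm init phase etcd local
    sudo kubeadm init phase control-plane all
    sudo kubeadm init phase kubelet-start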
	I0229 02:17:56.311800  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:17:58.809250  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:17:58.419456  360079 main.go:141] libmachine: (no-preload-907398) Calling .GetSSHPort
	I0229 02:17:58.421615  360079 main.go:141] libmachine: Using API Version  1
	I0229 02:17:58.423246  360079 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:17:58.423335  360079 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0229 02:17:58.423343  360079 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0229 02:17:58.423357  360079 main.go:141] libmachine: (no-preload-907398) Calling .GetSSHHostname
	I0229 02:17:58.424461  360079 main.go:141] libmachine: (no-preload-907398) Calling .GetSSHKeyPath
	I0229 02:17:58.424633  360079 main.go:141] libmachine: (no-preload-907398) Calling .GetSSHUsername
	I0229 02:17:58.424741  360079 sshutil.go:53] new ssh client: &{IP:192.168.61.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-309085/.minikube/machines/no-preload-907398/id_rsa Username:docker}
	I0229 02:17:58.424781  360079 main.go:141] libmachine: (no-preload-907398) DBG | domain no-preload-907398 has defined MAC address 52:54:00:c8:70:9e in network mk-no-preload-907398
	I0229 02:17:58.425249  360079 main.go:141] libmachine: (no-preload-907398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:70:9e", ip: ""} in network mk-no-preload-907398: {Iface:virbr1 ExpiryTime:2024-02-29 03:12:50 +0000 UTC Type:0 Mac:52:54:00:c8:70:9e Iaid: IPaddr:192.168.61.150 Prefix:24 Hostname:no-preload-907398 Clientid:01:52:54:00:c8:70:9e}
	I0229 02:17:58.425315  360079 main.go:141] libmachine: (no-preload-907398) DBG | domain no-preload-907398 has defined IP address 192.168.61.150 and MAC address 52:54:00:c8:70:9e in network mk-no-preload-907398
	I0229 02:17:58.425145  360079 main.go:141] libmachine: (no-preload-907398) Calling .GetSSHPort
	I0229 02:17:58.425622  360079 main.go:141] libmachine: (no-preload-907398) Calling .GetSSHKeyPath
	I0229 02:17:58.425732  360079 main.go:141] libmachine: (no-preload-907398) Calling .GetSSHUsername
	I0229 02:17:58.425865  360079 sshutil.go:53] new ssh client: &{IP:192.168.61.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-309085/.minikube/machines/no-preload-907398/id_rsa Username:docker}
	I0229 02:17:58.426305  360079 main.go:141] libmachine: (no-preload-907398) DBG | domain no-preload-907398 has defined MAC address 52:54:00:c8:70:9e in network mk-no-preload-907398
	I0229 02:17:58.430169  360079 main.go:141] libmachine: (no-preload-907398) Calling .GetSSHPort
	I0229 02:17:58.430190  360079 main.go:141] libmachine: (no-preload-907398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:70:9e", ip: ""} in network mk-no-preload-907398: {Iface:virbr1 ExpiryTime:2024-02-29 03:12:50 +0000 UTC Type:0 Mac:52:54:00:c8:70:9e Iaid: IPaddr:192.168.61.150 Prefix:24 Hostname:no-preload-907398 Clientid:01:52:54:00:c8:70:9e}
	I0229 02:17:58.430213  360079 main.go:141] libmachine: (no-preload-907398) DBG | domain no-preload-907398 has defined IP address 192.168.61.150 and MAC address 52:54:00:c8:70:9e in network mk-no-preload-907398
	I0229 02:17:58.430221  360079 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:17:58.430491  360079 main.go:141] libmachine: (no-preload-907398) Calling .GetSSHKeyPath
	I0229 02:17:58.430917  360079 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0229 02:17:58.430946  360079 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:17:58.431346  360079 main.go:141] libmachine: (no-preload-907398) Calling .GetSSHUsername
	I0229 02:17:58.431541  360079 sshutil.go:53] new ssh client: &{IP:192.168.61.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-309085/.minikube/machines/no-preload-907398/id_rsa Username:docker}
	I0229 02:17:58.448561  360079 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39103
	I0229 02:17:58.449216  360079 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:17:58.449840  360079 main.go:141] libmachine: Using API Version  1
	I0229 02:17:58.449868  360079 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:17:58.450301  360079 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:17:58.450574  360079 main.go:141] libmachine: (no-preload-907398) Calling .GetState
	I0229 02:17:58.452414  360079 main.go:141] libmachine: (no-preload-907398) Calling .DriverName
	I0229 02:17:58.452680  360079 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0229 02:17:58.452696  360079 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0229 02:17:58.452714  360079 main.go:141] libmachine: (no-preload-907398) Calling .GetSSHHostname
	I0229 02:17:58.455680  360079 main.go:141] libmachine: (no-preload-907398) DBG | domain no-preload-907398 has defined MAC address 52:54:00:c8:70:9e in network mk-no-preload-907398
	I0229 02:17:58.456155  360079 main.go:141] libmachine: (no-preload-907398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:70:9e", ip: ""} in network mk-no-preload-907398: {Iface:virbr1 ExpiryTime:2024-02-29 03:12:50 +0000 UTC Type:0 Mac:52:54:00:c8:70:9e Iaid: IPaddr:192.168.61.150 Prefix:24 Hostname:no-preload-907398 Clientid:01:52:54:00:c8:70:9e}
	I0229 02:17:58.456179  360079 main.go:141] libmachine: (no-preload-907398) DBG | domain no-preload-907398 has defined IP address 192.168.61.150 and MAC address 52:54:00:c8:70:9e in network mk-no-preload-907398
	I0229 02:17:58.456414  360079 main.go:141] libmachine: (no-preload-907398) Calling .GetSSHPort
	I0229 02:17:58.456600  360079 main.go:141] libmachine: (no-preload-907398) Calling .GetSSHKeyPath
	I0229 02:17:58.456726  360079 main.go:141] libmachine: (no-preload-907398) Calling .GetSSHUsername
	I0229 02:17:58.457041  360079 sshutil.go:53] new ssh client: &{IP:192.168.61.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-309085/.minikube/machines/no-preload-907398/id_rsa Username:docker}
	I0229 02:17:58.560024  360079 node_ready.go:35] waiting up to 6m0s for node "no-preload-907398" to be "Ready" ...
	I0229 02:17:58.560149  360079 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0229 02:17:58.562721  360079 node_ready.go:49] node "no-preload-907398" has status "Ready":"True"
	I0229 02:17:58.562749  360079 node_ready.go:38] duration metric: took 2.693389ms waiting for node "no-preload-907398" to be "Ready" ...
	I0229 02:17:58.562767  360079 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 02:17:58.568960  360079 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-907398" in "kube-system" namespace to be "Ready" ...
	I0229 02:17:58.583361  360079 pod_ready.go:92] pod "etcd-no-preload-907398" in "kube-system" namespace has status "Ready":"True"
	I0229 02:17:58.583392  360079 pod_ready.go:81] duration metric: took 14.411119ms waiting for pod "etcd-no-preload-907398" in "kube-system" namespace to be "Ready" ...
	I0229 02:17:58.583408  360079 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-907398" in "kube-system" namespace to be "Ready" ...
	I0229 02:17:58.612395  360079 pod_ready.go:92] pod "kube-apiserver-no-preload-907398" in "kube-system" namespace has status "Ready":"True"
	I0229 02:17:58.612430  360079 pod_ready.go:81] duration metric: took 29.012395ms waiting for pod "kube-apiserver-no-preload-907398" in "kube-system" namespace to be "Ready" ...
	I0229 02:17:58.612444  360079 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-907398" in "kube-system" namespace to be "Ready" ...
	I0229 02:17:58.624710  360079 pod_ready.go:92] pod "kube-controller-manager-no-preload-907398" in "kube-system" namespace has status "Ready":"True"
	I0229 02:17:58.624742  360079 pod_ready.go:81] duration metric: took 12.287509ms waiting for pod "kube-controller-manager-no-preload-907398" in "kube-system" namespace to be "Ready" ...
	I0229 02:17:58.624755  360079 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-907398" in "kube-system" namespace to be "Ready" ...
	I0229 02:17:58.635770  360079 pod_ready.go:92] pod "kube-scheduler-no-preload-907398" in "kube-system" namespace has status "Ready":"True"
	I0229 02:17:58.635801  360079 pod_ready.go:81] duration metric: took 11.037539ms waiting for pod "kube-scheduler-no-preload-907398" in "kube-system" namespace to be "Ready" ...
	I0229 02:17:58.635813  360079 pod_ready.go:38] duration metric: took 73.031722ms of extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 02:17:58.635837  360079 api_server.go:52] waiting for apiserver process to appear ...
	I0229 02:17:58.635901  360079 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
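pgrep -xnf matches the pattern against the full command line and reports the newest match, so the check above only succeeds once a kube-apiserver process with "minikube" in its arguments exists. The same readiness gate as a plain shell loop:

    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      sleep 1
    done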
	I0229 02:17:58.706760  360079 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0229 02:17:58.712477  360079 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0229 02:17:58.747607  360079 addons.go:426] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0229 02:17:58.747647  360079 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0229 02:17:58.782941  360079 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0229 02:17:58.782966  360079 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0229 02:17:58.861056  360079 addons.go:426] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0229 02:17:58.861086  360079 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0229 02:17:58.914123  360079 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0229 02:17:58.914153  360079 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0229 02:17:58.977830  360079 addons.go:426] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0229 02:17:58.977864  360079 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0229 02:17:59.075704  360079 addons.go:426] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0229 02:17:59.075734  360079 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0229 02:17:59.087287  360079 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0229 02:17:59.087318  360079 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0229 02:17:59.208828  360079 addons.go:426] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0229 02:17:59.208860  360079 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0229 02:17:59.244139  360079 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0229 02:17:59.335848  360079 start.go:929] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
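The injection reported above is the long sed pipeline from a few lines earlier: the coredns ConfigMap is fetched, a hosts{} stanza mapping 192.168.61.1 to host.minikube.internal is spliced in ahead of the forward directive, and the result is fed back through kubectl replace. One way to confirm the edit took (Corefile is the ConfigMap's data key):

    kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'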
	I0229 02:17:59.335882  360079 addons.go:426] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0229 02:17:59.335906  360079 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0229 02:17:59.335928  360079 api_server.go:72] duration metric: took 932.545738ms to wait for apiserver process to appear ...
	I0229 02:17:59.335948  360079 api_server.go:88] waiting for apiserver healthz status ...
	I0229 02:17:59.335972  360079 api_server.go:253] Checking apiserver healthz at https://192.168.61.150:8443/healthz ...
	I0229 02:17:59.385781  360079 addons.go:426] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0229 02:17:59.385818  360079 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0229 02:17:59.446518  360079 api_server.go:279] https://192.168.61.150:8443/healthz returned 200:
	ok
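The healthz probe needs no client certificate: by default the system:public-info-viewer binding lets even unauthenticated callers read /healthz, /livez and /readyz. The equivalent spot check from a shell (self-signed serving cert, hence -k):

    curl -k https://192.168.61.150:8443/healthz
    # prints: ok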
	I0229 02:17:59.448251  360079 addons.go:426] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0229 02:17:59.448278  360079 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0229 02:17:59.480111  360079 api_server.go:141] control plane version: v1.29.0-rc.2
	I0229 02:17:59.480149  360079 api_server.go:131] duration metric: took 144.191444ms to wait for apiserver health ...
	I0229 02:17:59.480161  360079 system_pods.go:43] waiting for kube-system pods to appear ...
	I0229 02:17:59.524432  360079 system_pods.go:59] 7 kube-system pods found
	I0229 02:17:59.524474  360079 system_pods.go:61] "coredns-76f75df574-lh6fx" [092a35bd-da64-465c-aac2-2db805ba942f] Pending
	I0229 02:17:59.524481  360079 system_pods.go:61] "coredns-76f75df574-s7wgl" [edf9b2e4-e3ae-44b4-9561-722b74c19032] Pending
	I0229 02:17:59.524486  360079 system_pods.go:61] "etcd-no-preload-907398" [90447e7e-0f30-4a5c-87d2-2ea1afc92dbe] Running
	I0229 02:17:59.524492  360079 system_pods.go:61] "kube-apiserver-no-preload-907398" [14240b57-03a5-4746-a246-28a338d7dbc1] Running
	I0229 02:17:59.524499  360079 system_pods.go:61] "kube-controller-manager-no-preload-907398" [45c6c2c5-ac84-4fc2-8981-e2f2ae6d3212] Running
	I0229 02:17:59.524508  360079 system_pods.go:61] "kube-proxy-r95w4" [32ff71b3-9287-4afa-9743-0e5e9068fa6d] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0229 02:17:59.524514  360079 system_pods.go:61] "kube-scheduler-no-preload-907398" [db7fbfcd-11c3-4ffb-a4c0-5e57ec7e069f] Running
	I0229 02:17:59.524526  360079 system_pods.go:74] duration metric: took 44.35791ms to wait for pod list to return data ...
	I0229 02:17:59.524539  360079 default_sa.go:34] waiting for default service account to be created ...
	I0229 02:17:59.556701  360079 addons.go:426] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0229 02:17:59.556744  360079 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0229 02:17:59.586815  360079 default_sa.go:45] found service account: "default"
	I0229 02:17:59.586867  360079 default_sa.go:55] duration metric: took 62.31539ms for default service account to be created ...
	I0229 02:17:59.586883  360079 system_pods.go:116] waiting for k8s-apps to be running ...
	I0229 02:17:59.613376  360079 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0229 02:17:59.661179  360079 system_pods.go:86] 7 kube-system pods found
	I0229 02:17:59.661281  360079 system_pods.go:89] "coredns-76f75df574-lh6fx" [092a35bd-da64-465c-aac2-2db805ba942f] Pending
	I0229 02:17:59.661305  360079 system_pods.go:89] "coredns-76f75df574-s7wgl" [edf9b2e4-e3ae-44b4-9561-722b74c19032] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0229 02:17:59.661322  360079 system_pods.go:89] "etcd-no-preload-907398" [90447e7e-0f30-4a5c-87d2-2ea1afc92dbe] Running
	I0229 02:17:59.661342  360079 system_pods.go:89] "kube-apiserver-no-preload-907398" [14240b57-03a5-4746-a246-28a338d7dbc1] Running
	I0229 02:17:59.661358  360079 system_pods.go:89] "kube-controller-manager-no-preload-907398" [45c6c2c5-ac84-4fc2-8981-e2f2ae6d3212] Running
	I0229 02:17:59.661376  360079 system_pods.go:89] "kube-proxy-r95w4" [32ff71b3-9287-4afa-9743-0e5e9068fa6d] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0229 02:17:59.661392  360079 system_pods.go:89] "kube-scheduler-no-preload-907398" [db7fbfcd-11c3-4ffb-a4c0-5e57ec7e069f] Running
	I0229 02:17:59.661424  360079 retry.go:31] will retry after 225.195811ms: missing components: kube-dns, kube-proxy
	I0229 02:17:59.900439  360079 system_pods.go:86] 7 kube-system pods found
	I0229 02:17:59.900490  360079 system_pods.go:89] "coredns-76f75df574-lh6fx" [092a35bd-da64-465c-aac2-2db805ba942f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0229 02:17:59.900539  360079 system_pods.go:89] "coredns-76f75df574-s7wgl" [edf9b2e4-e3ae-44b4-9561-722b74c19032] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0229 02:17:59.900555  360079 system_pods.go:89] "etcd-no-preload-907398" [90447e7e-0f30-4a5c-87d2-2ea1afc92dbe] Running
	I0229 02:17:59.900563  360079 system_pods.go:89] "kube-apiserver-no-preload-907398" [14240b57-03a5-4746-a246-28a338d7dbc1] Running
	I0229 02:17:59.900576  360079 system_pods.go:89] "kube-controller-manager-no-preload-907398" [45c6c2c5-ac84-4fc2-8981-e2f2ae6d3212] Running
	I0229 02:17:59.900587  360079 system_pods.go:89] "kube-proxy-r95w4" [32ff71b3-9287-4afa-9743-0e5e9068fa6d] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0229 02:17:59.900597  360079 system_pods.go:89] "kube-scheduler-no-preload-907398" [db7fbfcd-11c3-4ffb-a4c0-5e57ec7e069f] Running
	I0229 02:17:59.900620  360079 retry.go:31] will retry after 348.416029ms: missing components: kube-dns, kube-proxy
	I0229 02:18:00.221814  360079 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.509290892s)
	I0229 02:18:00.221894  360079 main.go:141] libmachine: Making call to close driver server
	I0229 02:18:00.221910  360079 main.go:141] libmachine: (no-preload-907398) Calling .Close
	I0229 02:18:00.221939  360079 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.515133599s)
	I0229 02:18:00.221984  360079 main.go:141] libmachine: Making call to close driver server
	I0229 02:18:00.221998  360079 main.go:141] libmachine: (no-preload-907398) Calling .Close
	I0229 02:18:00.222483  360079 main.go:141] libmachine: (no-preload-907398) DBG | Closing plugin on server side
	I0229 02:18:00.222513  360079 main.go:141] libmachine: (no-preload-907398) DBG | Closing plugin on server side
	I0229 02:18:00.222695  360079 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:18:00.222753  360079 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:18:00.222784  360079 main.go:141] libmachine: Making call to close driver server
	I0229 02:18:00.222801  360079 main.go:141] libmachine: (no-preload-907398) Calling .Close
	I0229 02:18:00.223074  360079 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:18:00.223113  360079 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:18:00.224083  360079 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:18:00.224104  360079 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:18:00.224115  360079 main.go:141] libmachine: Making call to close driver server
	I0229 02:18:00.224123  360079 main.go:141] libmachine: (no-preload-907398) Calling .Close
	I0229 02:18:00.224355  360079 main.go:141] libmachine: (no-preload-907398) DBG | Closing plugin on server side
	I0229 02:18:00.224402  360079 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:18:00.224415  360079 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:18:00.254073  360079 main.go:141] libmachine: Making call to close driver server
	I0229 02:18:00.254130  360079 main.go:141] libmachine: (no-preload-907398) Calling .Close
	I0229 02:18:00.256526  360079 main.go:141] libmachine: (no-preload-907398) DBG | Closing plugin on server side
	I0229 02:18:00.256546  360079 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:18:00.256576  360079 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:18:00.281620  360079 system_pods.go:86] 8 kube-system pods found
	I0229 02:18:00.281652  360079 system_pods.go:89] "coredns-76f75df574-lh6fx" [092a35bd-da64-465c-aac2-2db805ba942f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0229 02:18:00.281658  360079 system_pods.go:89] "coredns-76f75df574-s7wgl" [edf9b2e4-e3ae-44b4-9561-722b74c19032] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0229 02:18:00.281664  360079 system_pods.go:89] "etcd-no-preload-907398" [90447e7e-0f30-4a5c-87d2-2ea1afc92dbe] Running
	I0229 02:18:00.281671  360079 system_pods.go:89] "kube-apiserver-no-preload-907398" [14240b57-03a5-4746-a246-28a338d7dbc1] Running
	I0229 02:18:00.281676  360079 system_pods.go:89] "kube-controller-manager-no-preload-907398" [45c6c2c5-ac84-4fc2-8981-e2f2ae6d3212] Running
	I0229 02:18:00.281681  360079 system_pods.go:89] "kube-proxy-r95w4" [32ff71b3-9287-4afa-9743-0e5e9068fa6d] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0229 02:18:00.281685  360079 system_pods.go:89] "kube-scheduler-no-preload-907398" [db7fbfcd-11c3-4ffb-a4c0-5e57ec7e069f] Running
	I0229 02:18:00.281695  360079 system_pods.go:89] "storage-provisioner" [e1001a81-038a-4eef-8b9f-8659058fa9c2] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0229 02:18:00.281717  360079 retry.go:31] will retry after 374.602979ms: missing components: kube-dns, kube-proxy
	I0229 02:18:00.701978  360079 system_pods.go:86] 8 kube-system pods found
	I0229 02:18:00.702028  360079 system_pods.go:89] "coredns-76f75df574-lh6fx" [092a35bd-da64-465c-aac2-2db805ba942f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0229 02:18:00.702039  360079 system_pods.go:89] "coredns-76f75df574-s7wgl" [edf9b2e4-e3ae-44b4-9561-722b74c19032] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0229 02:18:00.702048  360079 system_pods.go:89] "etcd-no-preload-907398" [90447e7e-0f30-4a5c-87d2-2ea1afc92dbe] Running
	I0229 02:18:00.702059  360079 system_pods.go:89] "kube-apiserver-no-preload-907398" [14240b57-03a5-4746-a246-28a338d7dbc1] Running
	I0229 02:18:00.702066  360079 system_pods.go:89] "kube-controller-manager-no-preload-907398" [45c6c2c5-ac84-4fc2-8981-e2f2ae6d3212] Running
	I0229 02:18:00.702075  360079 system_pods.go:89] "kube-proxy-r95w4" [32ff71b3-9287-4afa-9743-0e5e9068fa6d] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0229 02:18:00.702094  360079 system_pods.go:89] "kube-scheduler-no-preload-907398" [db7fbfcd-11c3-4ffb-a4c0-5e57ec7e069f] Running
	I0229 02:18:00.702107  360079 system_pods.go:89] "storage-provisioner" [e1001a81-038a-4eef-8b9f-8659058fa9c2] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0229 02:18:00.702131  360079 retry.go:31] will retry after 563.29938ms: missing components: kube-dns, kube-proxy
	I0229 02:18:01.275888  360079 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.031696303s)
	I0229 02:18:01.275958  360079 main.go:141] libmachine: Making call to close driver server
	I0229 02:18:01.275973  360079 main.go:141] libmachine: (no-preload-907398) Calling .Close
	I0229 02:18:01.276375  360079 main.go:141] libmachine: (no-preload-907398) DBG | Closing plugin on server side
	I0229 02:18:01.276422  360079 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:18:01.276435  360079 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:18:01.276448  360079 main.go:141] libmachine: Making call to close driver server
	I0229 02:18:01.276473  360079 main.go:141] libmachine: (no-preload-907398) Calling .Close
	I0229 02:18:01.276898  360079 main.go:141] libmachine: (no-preload-907398) DBG | Closing plugin on server side
	I0229 02:18:01.276957  360079 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:18:01.277012  360079 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:18:01.277032  360079 addons.go:470] Verifying addon metrics-server=true in "no-preload-907398"
	I0229 02:18:01.286612  360079 system_pods.go:86] 9 kube-system pods found
	I0229 02:18:01.286655  360079 system_pods.go:89] "coredns-76f75df574-lh6fx" [092a35bd-da64-465c-aac2-2db805ba942f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0229 02:18:01.286668  360079 system_pods.go:89] "coredns-76f75df574-s7wgl" [edf9b2e4-e3ae-44b4-9561-722b74c19032] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0229 02:18:01.286676  360079 system_pods.go:89] "etcd-no-preload-907398" [90447e7e-0f30-4a5c-87d2-2ea1afc92dbe] Running
	I0229 02:18:01.286686  360079 system_pods.go:89] "kube-apiserver-no-preload-907398" [14240b57-03a5-4746-a246-28a338d7dbc1] Running
	I0229 02:18:01.286697  360079 system_pods.go:89] "kube-controller-manager-no-preload-907398" [45c6c2c5-ac84-4fc2-8981-e2f2ae6d3212] Running
	I0229 02:18:01.286706  360079 system_pods.go:89] "kube-proxy-r95w4" [32ff71b3-9287-4afa-9743-0e5e9068fa6d] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0229 02:18:01.286716  360079 system_pods.go:89] "kube-scheduler-no-preload-907398" [db7fbfcd-11c3-4ffb-a4c0-5e57ec7e069f] Running
	I0229 02:18:01.286726  360079 system_pods.go:89] "metrics-server-57f55c9bc5-hln75" [8bfb6800-10c6-4154-8311-e568c1e146d1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0229 02:18:01.286745  360079 system_pods.go:89] "storage-provisioner" [e1001a81-038a-4eef-8b9f-8659058fa9c2] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0229 02:18:01.286772  360079 retry.go:31] will retry after 523.32187ms: missing components: kube-dns, kube-proxy
	I0229 02:18:01.829847  360079 system_pods.go:86] 9 kube-system pods found
	I0229 02:18:01.829894  360079 system_pods.go:89] "coredns-76f75df574-lh6fx" [092a35bd-da64-465c-aac2-2db805ba942f] Running
	I0229 02:18:01.829905  360079 system_pods.go:89] "coredns-76f75df574-s7wgl" [edf9b2e4-e3ae-44b4-9561-722b74c19032] Running
	I0229 02:18:01.829912  360079 system_pods.go:89] "etcd-no-preload-907398" [90447e7e-0f30-4a5c-87d2-2ea1afc92dbe] Running
	I0229 02:18:01.829924  360079 system_pods.go:89] "kube-apiserver-no-preload-907398" [14240b57-03a5-4746-a246-28a338d7dbc1] Running
	I0229 02:18:01.829932  360079 system_pods.go:89] "kube-controller-manager-no-preload-907398" [45c6c2c5-ac84-4fc2-8981-e2f2ae6d3212] Running
	I0229 02:18:01.829938  360079 system_pods.go:89] "kube-proxy-r95w4" [32ff71b3-9287-4afa-9743-0e5e9068fa6d] Running
	I0229 02:18:01.829944  360079 system_pods.go:89] "kube-scheduler-no-preload-907398" [db7fbfcd-11c3-4ffb-a4c0-5e57ec7e069f] Running
	I0229 02:18:01.829957  360079 system_pods.go:89] "metrics-server-57f55c9bc5-hln75" [8bfb6800-10c6-4154-8311-e568c1e146d1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0229 02:18:01.829967  360079 system_pods.go:89] "storage-provisioner" [e1001a81-038a-4eef-8b9f-8659058fa9c2] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0229 02:18:01.829989  360079 system_pods.go:126] duration metric: took 2.243096892s to wait for k8s-apps to be running ...
	I0229 02:18:01.830005  360079 system_svc.go:44] waiting for kubelet service to be running ....
	I0229 02:18:01.830091  360079 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 02:18:02.189987  360079 system_svc.go:56] duration metric: took 359.972364ms WaitForService to wait for kubelet.
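systemctl is-active --quiet reports purely through its exit status (0 when the unit is active), which is why the kubelet check above produces no output. For example:

    sudo systemctl is-active --quiet kubelet && echo "kubelet is active"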
	I0229 02:18:02.190024  360079 kubeadm.go:581] duration metric: took 3.786642999s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0229 02:18:02.190050  360079 node_conditions.go:102] verifying NodePressure condition ...
	I0229 02:18:02.190227  360079 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.576785344s)
	I0229 02:18:02.190281  360079 main.go:141] libmachine: Making call to close driver server
	I0229 02:18:02.190299  360079 main.go:141] libmachine: (no-preload-907398) Calling .Close
	I0229 02:18:02.190727  360079 main.go:141] libmachine: (no-preload-907398) DBG | Closing plugin on server side
	I0229 02:18:02.190798  360079 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:18:02.190810  360079 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:18:02.190819  360079 main.go:141] libmachine: Making call to close driver server
	I0229 02:18:02.190827  360079 main.go:141] libmachine: (no-preload-907398) Calling .Close
	I0229 02:18:02.193012  360079 main.go:141] libmachine: (no-preload-907398) DBG | Closing plugin on server side
	I0229 02:18:02.193025  360079 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:18:02.193062  360079 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:18:02.194791  360079 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-907398 addons enable metrics-server
	
	I0229 02:18:02.196317  360079 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0229 02:18:02.197863  360079 addons.go:505] enable addons completed in 3.84551804s: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
	I0229 02:18:02.210831  360079 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0229 02:18:02.210859  360079 node_conditions.go:123] node cpu capacity is 2
	I0229 02:18:02.210871  360079 node_conditions.go:105] duration metric: took 20.81411ms to run NodePressure ...
	I0229 02:18:02.210885  360079 start.go:228] waiting for startup goroutines ...
	I0229 02:18:02.210894  360079 start.go:233] waiting for cluster config update ...
	I0229 02:18:02.210911  360079 start.go:242] writing updated cluster config ...
	I0229 02:18:02.211195  360079 ssh_runner.go:195] Run: rm -f paused
	I0229 02:18:02.271875  360079 start.go:601] kubectl: 1.29.2, cluster: 1.29.0-rc.2 (minor skew: 0)
	I0229 02:18:02.273687  360079 out.go:177] * Done! kubectl is now configured to use "no-preload-907398" cluster and "default" namespace by default
	I0229 02:17:59.194448  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:17:59.212378  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:17:59.212455  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:17:59.272835  360776 cri.go:89] found id: ""
	I0229 02:17:59.272864  360776 logs.go:276] 0 containers: []
	W0229 02:17:59.272873  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:17:59.272879  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:17:59.272945  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:17:59.326044  360776 cri.go:89] found id: ""
	I0229 02:17:59.326097  360776 logs.go:276] 0 containers: []
	W0229 02:17:59.326110  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:17:59.326119  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:17:59.326195  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:17:59.375112  360776 cri.go:89] found id: ""
	I0229 02:17:59.375147  360776 logs.go:276] 0 containers: []
	W0229 02:17:59.375158  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:17:59.375165  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:17:59.375231  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:17:59.423465  360776 cri.go:89] found id: ""
	I0229 02:17:59.423489  360776 logs.go:276] 0 containers: []
	W0229 02:17:59.423498  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:17:59.423504  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:17:59.423564  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:17:59.464386  360776 cri.go:89] found id: ""
	I0229 02:17:59.464416  360776 logs.go:276] 0 containers: []
	W0229 02:17:59.464427  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:17:59.464433  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:17:59.464493  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:17:59.507714  360776 cri.go:89] found id: ""
	I0229 02:17:59.507746  360776 logs.go:276] 0 containers: []
	W0229 02:17:59.507759  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:17:59.507768  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:17:59.507836  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:17:59.563729  360776 cri.go:89] found id: ""
	I0229 02:17:59.563761  360776 logs.go:276] 0 containers: []
	W0229 02:17:59.563773  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:17:59.563781  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:17:59.563869  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:17:59.623366  360776 cri.go:89] found id: ""
	I0229 02:17:59.623392  360776 logs.go:276] 0 containers: []
	W0229 02:17:59.623404  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:17:59.623417  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:17:59.623432  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:17:59.700723  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:17:59.700783  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:17:59.722858  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:17:59.722904  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:17:59.830864  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:17:59.830892  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:17:59.830908  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:17:59.881944  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:17:59.881996  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:18:00.814212  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:18:03.310396  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:18:05.240170  360217 kubeadm.go:322] [apiclient] All control plane components are healthy after 6.506059 seconds
	I0229 02:18:05.240365  360217 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0229 02:18:05.258467  360217 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0229 02:18:05.790274  360217 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0229 02:18:05.790547  360217 kubeadm.go:322] [mark-control-plane] Marking the node default-k8s-diff-port-254367 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0229 02:18:06.306317  360217 kubeadm.go:322] [bootstrap-token] Using token: up9wo1.za7nj6xpc5l7gy5b
	I0229 02:18:06.308235  360217 out.go:204]   - Configuring RBAC rules ...
	I0229 02:18:06.308376  360217 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0229 02:18:06.317348  360217 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0229 02:18:06.328386  360217 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0229 02:18:06.333738  360217 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0229 02:18:06.338257  360217 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0229 02:18:06.342124  360217 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0229 02:18:06.357763  360217 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0229 02:18:06.667301  360217 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0229 02:18:06.893898  360217 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0229 02:18:06.900021  360217 kubeadm.go:322] 
	I0229 02:18:06.900123  360217 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0229 02:18:06.900136  360217 kubeadm.go:322] 
	I0229 02:18:06.900244  360217 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0229 02:18:06.900251  360217 kubeadm.go:322] 
	I0229 02:18:06.900282  360217 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0229 02:18:06.900361  360217 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0229 02:18:06.900422  360217 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0229 02:18:06.900428  360217 kubeadm.go:322] 
	I0229 02:18:06.900491  360217 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0229 02:18:06.900505  360217 kubeadm.go:322] 
	I0229 02:18:06.900564  360217 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0229 02:18:06.900570  360217 kubeadm.go:322] 
	I0229 02:18:06.900633  360217 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0229 02:18:06.900725  360217 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0229 02:18:06.900814  360217 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0229 02:18:06.900832  360217 kubeadm.go:322] 
	I0229 02:18:06.900935  360217 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0229 02:18:06.901029  360217 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0229 02:18:06.901038  360217 kubeadm.go:322] 
	I0229 02:18:06.901139  360217 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8444 --token up9wo1.za7nj6xpc5l7gy5b \
	I0229 02:18:06.901267  360217 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:99ebea2fb56f25d35c82503561681d8a6f4727046bb097effad0d0203de43e75 \
	I0229 02:18:06.901296  360217 kubeadm.go:322] 	--control-plane 
	I0229 02:18:06.901302  360217 kubeadm.go:322] 
	I0229 02:18:06.901439  360217 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0229 02:18:06.901447  360217 kubeadm.go:322] 
	I0229 02:18:06.901554  360217 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8444 --token up9wo1.za7nj6xpc5l7gy5b \
	I0229 02:18:06.901681  360217 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:99ebea2fb56f25d35c82503561681d8a6f4727046bb097effad0d0203de43e75 
	I0229 02:18:06.904775  360217 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0229 02:18:06.904839  360217 cni.go:84] Creating CNI manager for ""
	I0229 02:18:06.904862  360217 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0229 02:18:06.906658  360217 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0229 02:18:02.462408  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:18:02.485957  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:18:02.486017  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:18:02.540769  360776 cri.go:89] found id: ""
	I0229 02:18:02.540803  360776 logs.go:276] 0 containers: []
	W0229 02:18:02.540814  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:18:02.540834  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:18:02.540902  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:18:02.584488  360776 cri.go:89] found id: ""
	I0229 02:18:02.584514  360776 logs.go:276] 0 containers: []
	W0229 02:18:02.584525  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:18:02.584532  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:18:02.584601  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:18:02.644908  360776 cri.go:89] found id: ""
	I0229 02:18:02.644943  360776 logs.go:276] 0 containers: []
	W0229 02:18:02.644956  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:18:02.644963  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:18:02.645031  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:18:02.702464  360776 cri.go:89] found id: ""
	I0229 02:18:02.702498  360776 logs.go:276] 0 containers: []
	W0229 02:18:02.702510  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:18:02.702519  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:18:02.702587  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:18:02.754980  360776 cri.go:89] found id: ""
	I0229 02:18:02.755008  360776 logs.go:276] 0 containers: []
	W0229 02:18:02.755020  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:18:02.755029  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:18:02.755101  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:18:02.807863  360776 cri.go:89] found id: ""
	I0229 02:18:02.807890  360776 logs.go:276] 0 containers: []
	W0229 02:18:02.807901  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:18:02.807908  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:18:02.807964  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:18:02.850910  360776 cri.go:89] found id: ""
	I0229 02:18:02.850943  360776 logs.go:276] 0 containers: []
	W0229 02:18:02.850956  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:18:02.850964  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:18:02.851034  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:18:02.895792  360776 cri.go:89] found id: ""
	I0229 02:18:02.895832  360776 logs.go:276] 0 containers: []
	W0229 02:18:02.895844  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:18:02.895857  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:18:02.895874  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:18:02.951353  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:18:02.951399  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:18:02.970262  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:18:02.970303  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:18:03.055141  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:18:03.055165  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:18:03.055182  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:18:03.091751  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:18:03.091791  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
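[Annotation] The cycle above is minikube's apiserver probe failing closed: for each expected component (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet, kubernetes-dashboard) it lists CRI containers in any state, finds none, and falls through to log gathering; the `describe nodes` step then fails with "connection refused" on localhost:8443 precisely because no kube-apiserver container exists yet. A minimal Go sketch of the listing loop follows; the component list mirrors the log, but running the commands locally is an assumption for illustration, since minikube executes them on the guest via ssh_runner.

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainers mirrors the probe in the log: ask crictl for all
// containers (any state) whose name matches the component, printing
// only IDs; an empty result is the `found id: ""` case above.
func listContainers(component string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a",
		"--quiet", "--name="+component).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	components := []string{ // same order as the log
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet",
		"kubernetes-dashboard",
	}
	for _, c := range components {
		ids, err := listContainers(c)
		if err != nil {
			fmt.Printf("listing %s failed: %v\n", c, err)
			continue
		}
		if len(ids) == 0 {
			fmt.Printf("No container was found matching %q\n", c)
			continue
		}
		fmt.Printf("%s: %v\n", c, ids)
	}
}
```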
	I0229 02:18:05.646070  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:18:05.663225  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:18:05.663301  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:18:05.712565  360776 cri.go:89] found id: ""
	I0229 02:18:05.712604  360776 logs.go:276] 0 containers: []
	W0229 02:18:05.712623  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:18:05.712632  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:18:05.712697  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:18:05.761656  360776 cri.go:89] found id: ""
	I0229 02:18:05.761685  360776 logs.go:276] 0 containers: []
	W0229 02:18:05.761699  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:18:05.761715  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:18:05.761780  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:18:05.805264  360776 cri.go:89] found id: ""
	I0229 02:18:05.805299  360776 logs.go:276] 0 containers: []
	W0229 02:18:05.805310  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:18:05.805318  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:18:05.805382  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:18:05.853483  360776 cri.go:89] found id: ""
	I0229 02:18:05.853555  360776 logs.go:276] 0 containers: []
	W0229 02:18:05.853569  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:18:05.853578  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:18:05.853653  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:18:05.894561  360776 cri.go:89] found id: ""
	I0229 02:18:05.894589  360776 logs.go:276] 0 containers: []
	W0229 02:18:05.894608  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:18:05.894616  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:18:05.894680  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:18:05.937784  360776 cri.go:89] found id: ""
	I0229 02:18:05.937816  360776 logs.go:276] 0 containers: []
	W0229 02:18:05.937825  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:18:05.937832  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:18:05.937900  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:18:05.982000  360776 cri.go:89] found id: ""
	I0229 02:18:05.982028  360776 logs.go:276] 0 containers: []
	W0229 02:18:05.982039  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:18:05.982046  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:18:05.982136  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:18:06.025395  360776 cri.go:89] found id: ""
	I0229 02:18:06.025430  360776 logs.go:276] 0 containers: []
	W0229 02:18:06.025443  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:18:06.025455  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:18:06.025470  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:18:06.078175  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:18:06.078221  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:18:06.106042  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:18:06.106097  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:18:06.233485  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:18:06.233506  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:18:06.233522  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:18:06.273517  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:18:06.273557  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:18:06.908321  360217 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0229 02:18:06.928907  360217 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
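[Annotation] With the kvm2 driver and containerd runtime and no explicit CNI flag, cni.go recommends the built-in bridge CNI and ships a 457-byte conflist to /etc/cni/net.d/1-k8s.conflist, as the two lines above show. The log does not include the file's contents; the sketch below writes a representative bridge conflist of that general shape, and every field value in it is an assumption, not the exact bytes minikube transferred.

```go
package main

import (
	"fmt"
	"os"
)

// A representative bridge CNI conflist; version, names, subnet, and
// plugin options are illustrative assumptions only.
const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isGateway": true,
      "ipMasq": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}`

func main() {
	// Mirrors the two steps in the log: mkdir -p, then write the file.
	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist",
		[]byte(conflist), 0o644); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```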
	I0229 02:18:06.976992  360217 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0229 02:18:06.977068  360217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:18:06.977074  360217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=f83faa3abac33ca85ff15afa19006ad0a2554d61 minikube.k8s.io/name=default-k8s-diff-port-254367 minikube.k8s.io/updated_at=2024_02_29T02_18_06_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:18:07.053045  360217 ops.go:34] apiserver oom_adj: -16
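[Annotation] `apiserver oom_adj: -16` is the result of the /proc read a few lines up: minikube locates the newest kube-apiserver process and reads its oom_adj to confirm the OOM killer will steer away from it (negative values mean protected). A local sketch of the same check follows; note the log's pgrep matches the full command line (`-xnf kube-apiserver.*minikube.*`) while this simplified one matches the process name only.

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// pgrep -xn: newest process whose name is exactly kube-apiserver.
	pid, err := exec.Command("pgrep", "-xn", "kube-apiserver").Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, "kube-apiserver not running:", err)
		os.Exit(1)
	}
	adj, err := os.ReadFile("/proc/" + strings.TrimSpace(string(pid)) + "/oom_adj")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Printf("apiserver oom_adj: %s", adj)
}
```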
	I0229 02:18:07.339410  360217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:18:07.840356  360217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:18:08.340151  360217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:18:08.840168  360217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:18:05.809727  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:18:08.311572  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:18:08.827599  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:18:08.845166  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:18:08.845270  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:18:08.891258  360776 cri.go:89] found id: ""
	I0229 02:18:08.891291  360776 logs.go:276] 0 containers: []
	W0229 02:18:08.891303  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:18:08.891311  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:18:08.891381  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:18:08.936833  360776 cri.go:89] found id: ""
	I0229 02:18:08.936868  360776 logs.go:276] 0 containers: []
	W0229 02:18:08.936879  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:18:08.936888  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:18:08.936962  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:18:08.979759  360776 cri.go:89] found id: ""
	I0229 02:18:08.979788  360776 logs.go:276] 0 containers: []
	W0229 02:18:08.979800  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:18:08.979812  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:18:08.979878  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:18:09.023686  360776 cri.go:89] found id: ""
	I0229 02:18:09.023722  360776 logs.go:276] 0 containers: []
	W0229 02:18:09.023734  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:18:09.023744  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:18:09.023817  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:18:09.068374  360776 cri.go:89] found id: ""
	I0229 02:18:09.068413  360776 logs.go:276] 0 containers: []
	W0229 02:18:09.068426  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:18:09.068434  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:18:09.068502  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:18:09.147948  360776 cri.go:89] found id: ""
	I0229 02:18:09.147976  360776 logs.go:276] 0 containers: []
	W0229 02:18:09.147985  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:18:09.147991  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:18:09.148043  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:18:09.202491  360776 cri.go:89] found id: ""
	I0229 02:18:09.202522  360776 logs.go:276] 0 containers: []
	W0229 02:18:09.202534  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:18:09.202542  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:18:09.202605  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:18:09.248957  360776 cri.go:89] found id: ""
	I0229 02:18:09.248992  360776 logs.go:276] 0 containers: []
	W0229 02:18:09.249005  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:18:09.249018  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:18:09.249038  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:18:09.318433  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:18:09.318476  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:18:09.335205  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:18:09.335240  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:18:09.417917  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:18:09.417952  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:18:09.417969  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:18:09.464739  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:18:09.464779  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
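[Annotation] Each failed probe ends with the same diagnostics sweep: the last 400 journal lines for kubelet and containerd, warning-or-worse kernel messages from dmesg, and a container status listing that falls back from crictl to docker. The sweep also attempts `kubectl describe nodes` with the pinned v1.16.0 binary, which keeps failing with "connection refused" while the apiserver is down. A sketch of the four shell steps, run locally for illustration (minikube drives each through /bin/bash -c on the guest via ssh_runner):

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// The four gathering steps from the log, in order.
	steps := []struct{ name, cmd string }{
		{"kubelet", "sudo journalctl -u kubelet -n 400"},
		{"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
		{"containerd", "sudo journalctl -u containerd -n 400"},
		{"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
	}
	for _, s := range steps {
		out, err := exec.Command("/bin/bash", "-c", s.cmd).CombinedOutput()
		fmt.Printf("==> %s (err: %v)\n%s\n", s.name, err, out)
	}
}
```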
	I0229 02:18:12.017825  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:18:12.033452  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:18:12.033518  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:18:12.082587  360776 cri.go:89] found id: ""
	I0229 02:18:12.082621  360776 logs.go:276] 0 containers: []
	W0229 02:18:12.082634  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:18:12.082642  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:18:12.082714  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:18:12.132662  360776 cri.go:89] found id: ""
	I0229 02:18:12.132696  360776 logs.go:276] 0 containers: []
	W0229 02:18:12.132717  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:18:12.132725  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:18:12.132795  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:18:12.204316  360776 cri.go:89] found id: ""
	I0229 02:18:12.204343  360776 logs.go:276] 0 containers: []
	W0229 02:18:12.204351  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:18:12.204357  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:18:12.204417  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:18:12.255146  360776 cri.go:89] found id: ""
	I0229 02:18:12.255178  360776 logs.go:276] 0 containers: []
	W0229 02:18:12.255190  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:18:12.255198  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:18:12.255265  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:18:12.299280  360776 cri.go:89] found id: ""
	I0229 02:18:12.299314  360776 logs.go:276] 0 containers: []
	W0229 02:18:12.299328  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:18:12.299337  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:18:12.299410  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:18:12.340621  360776 cri.go:89] found id: ""
	I0229 02:18:12.340646  360776 logs.go:276] 0 containers: []
	W0229 02:18:12.340658  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:18:12.340667  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:18:12.340722  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:18:09.339996  360217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:18:09.839471  360217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:18:10.340401  360217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:18:10.839457  360217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:18:11.340046  360217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:18:11.839746  360217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:18:12.339889  360217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:18:12.839469  360217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:18:13.339676  360217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:18:13.840012  360217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:18:10.809010  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:18:13.307420  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
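[Annotation] Interleaved with the bootstrap, a third test process (361093) is polling a metrics-server pod whose Ready condition never flips to True; the roughly 2.5s spacing of the pod_ready lines suggests a fixed retry interval. A kubectl-based sketch of such a wait follows; minikube queries the API with client-go rather than shelling out, and the timeout and interval here are assumptions.

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// podReady reports whether the pod's Ready condition is "True".
func podReady(ns, pod string) bool {
	out, err := exec.Command("kubectl", "get", "pod", "-n", ns, pod,
		"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
	return err == nil && strings.TrimSpace(string(out)) == "True"
}

func main() {
	const ns, pod = "kube-system", "metrics-server-57f55c9bc5-9sdkl"
	deadline := time.Now().Add(5 * time.Minute) // assumed timeout
	for time.Now().Before(deadline) {
		if podReady(ns, pod) {
			fmt.Println("pod is Ready")
			return
		}
		fmt.Printf("pod %q in %q namespace has status \"Ready\":\"False\"\n", pod, ns)
		time.Sleep(2500 * time.Millisecond) // matches the log spacing
	}
	fmt.Println("timed out waiting for pod")
}
```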
	I0229 02:18:12.391888  360776 cri.go:89] found id: ""
	I0229 02:18:12.391926  360776 logs.go:276] 0 containers: []
	W0229 02:18:12.391938  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:18:12.391945  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:18:12.392010  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:18:12.440219  360776 cri.go:89] found id: ""
	I0229 02:18:12.440250  360776 logs.go:276] 0 containers: []
	W0229 02:18:12.440263  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:18:12.440276  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:18:12.440290  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:18:12.495586  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:18:12.495621  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:18:12.513608  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:18:12.513653  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:18:12.587894  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:18:12.587929  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:18:12.587956  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:18:12.625496  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:18:12.625533  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:18:15.187090  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:18:15.206990  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:18:15.207074  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:18:15.261493  360776 cri.go:89] found id: ""
	I0229 02:18:15.261522  360776 logs.go:276] 0 containers: []
	W0229 02:18:15.261535  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:18:15.261543  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:18:15.261620  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:18:15.302408  360776 cri.go:89] found id: ""
	I0229 02:18:15.302437  360776 logs.go:276] 0 containers: []
	W0229 02:18:15.302449  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:18:15.302457  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:18:15.302524  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:18:15.340553  360776 cri.go:89] found id: ""
	I0229 02:18:15.340580  360776 logs.go:276] 0 containers: []
	W0229 02:18:15.340590  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:18:15.340598  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:18:15.340661  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:18:15.383659  360776 cri.go:89] found id: ""
	I0229 02:18:15.383688  360776 logs.go:276] 0 containers: []
	W0229 02:18:15.383699  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:18:15.383708  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:18:15.383777  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:18:15.433164  360776 cri.go:89] found id: ""
	I0229 02:18:15.433200  360776 logs.go:276] 0 containers: []
	W0229 02:18:15.433212  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:18:15.433220  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:18:15.433293  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:18:15.479950  360776 cri.go:89] found id: ""
	I0229 02:18:15.479993  360776 logs.go:276] 0 containers: []
	W0229 02:18:15.480006  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:18:15.480014  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:18:15.480078  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:18:15.519601  360776 cri.go:89] found id: ""
	I0229 02:18:15.519628  360776 logs.go:276] 0 containers: []
	W0229 02:18:15.519637  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:18:15.519644  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:18:15.519707  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:18:15.564564  360776 cri.go:89] found id: ""
	I0229 02:18:15.564598  360776 logs.go:276] 0 containers: []
	W0229 02:18:15.564610  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:18:15.564624  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:18:15.564643  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:18:15.615855  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:18:15.615894  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:18:15.632464  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:18:15.632505  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:18:15.713177  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:18:15.713198  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:18:15.713214  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:18:15.749296  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:18:15.749326  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:18:14.340255  360217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:18:14.839541  360217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:18:15.339620  360217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:18:15.840469  360217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:18:16.339540  360217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:18:16.840203  360217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:18:17.339841  360217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:18:17.839673  360217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:18:18.339956  360217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:18:18.839965  360217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:18:19.023067  360217 kubeadm.go:1088] duration metric: took 12.046075339s to wait for elevateKubeSystemPrivileges.
	I0229 02:18:19.023110  360217 kubeadm.go:406] StartCluster complete in 4m58.952060994s
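[Annotation] The 500ms-spaced `kubectl get sa default` runs above are the elevateKubeSystemPrivileges wait: the cluster-admin binding for kube-system:default (created at 02:18:06.977) is only effective once the `default` ServiceAccount exists, so minikube retries the get until it succeeds, here after 12.05s. A sketch of that retry loop, using the same pinned kubectl path; the overall timeout is an assumption.

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	kubectl := "/var/lib/minikube/binaries/v1.28.4/kubectl"
	deadline := time.Now().Add(3 * time.Minute) // assumed timeout
	for time.Now().Before(deadline) {
		// Exit status 0 means the ServiceAccount exists.
		err := exec.Command("sudo", kubectl, "get", "sa", "default",
			"--kubeconfig=/var/lib/minikube/kubeconfig").Run()
		if err == nil {
			fmt.Println("default service account exists; RBAC bootstrap can proceed")
			return
		}
		time.Sleep(500 * time.Millisecond) // interval seen in the log
	}
	fmt.Println("timed out waiting for default service account")
}
```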
	I0229 02:18:19.023136  360217 settings.go:142] acquiring lock: {Name:mkf6d985c87ae1ba2300543c86d438bf48134dc6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:18:19.023240  360217 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18063-309085/kubeconfig
	I0229 02:18:19.025049  360217 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18063-309085/kubeconfig: {Name:mkd85f0f36cbc770f723a754929a01738907b7bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:18:19.027123  360217 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0229 02:18:19.027409  360217 config.go:182] Loaded profile config "default-k8s-diff-port-254367": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0229 02:18:19.027464  360217 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
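[Annotation] The toEnable map above is the full addon catalog with only dashboard, default-storageclass, metrics-server, and storage-provisioner switched on; each true entry becomes one of the `Setting addon ...=true` steps that follow. A small sketch of that filtering, with an assumed map shape:

```go
package main

import (
	"fmt"
	"sort"
)

// enabledAddons returns the names flagged true, sorted for a
// deterministic processing order.
func enabledAddons(toEnable map[string]bool) []string {
	var names []string
	for name, on := range toEnable {
		if on {
			names = append(names, name)
		}
	}
	sort.Strings(names)
	return names
}

func main() {
	toEnable := map[string]bool{ // subset of the map in the log
		"dashboard": true, "default-storageclass": true,
		"metrics-server": true, "storage-provisioner": true,
		"ingress": false, "registry": false,
	}
	fmt.Println(enabledAddons(toEnable))
	// [dashboard default-storageclass metrics-server storage-provisioner]
}
```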
	I0229 02:18:19.027538  360217 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-254367"
	I0229 02:18:19.027561  360217 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-254367"
	W0229 02:18:19.027576  360217 addons.go:243] addon storage-provisioner should already be in state true
	I0229 02:18:19.027588  360217 addons.go:69] Setting dashboard=true in profile "default-k8s-diff-port-254367"
	I0229 02:18:19.027620  360217 addons.go:234] Setting addon dashboard=true in "default-k8s-diff-port-254367"
	I0229 02:18:19.027628  360217 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-254367"
	W0229 02:18:19.027633  360217 addons.go:243] addon dashboard should already be in state true
	I0229 02:18:19.027642  360217 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-254367"
	I0229 02:18:19.027681  360217 host.go:66] Checking if "default-k8s-diff-port-254367" exists ...
	I0229 02:18:19.028079  360217 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0229 02:18:19.028088  360217 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0229 02:18:19.028108  360217 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:18:19.028114  360217 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:18:19.027623  360217 host.go:66] Checking if "default-k8s-diff-port-254367" exists ...
	I0229 02:18:19.028343  360217 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-254367"
	I0229 02:18:19.028368  360217 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-254367"
	W0229 02:18:19.028377  360217 addons.go:243] addon metrics-server should already be in state true
	I0229 02:18:19.028499  360217 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0229 02:18:19.028537  360217 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:18:19.028563  360217 host.go:66] Checking if "default-k8s-diff-port-254367" exists ...
	I0229 02:18:19.028931  360217 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0229 02:18:19.028959  360217 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:18:19.047714  360217 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39539
	I0229 02:18:19.048288  360217 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:18:19.048404  360217 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38743
	I0229 02:18:19.048502  360217 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33401
	I0229 02:18:19.048785  360217 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:18:19.048915  360217 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:18:19.049087  360217 main.go:141] libmachine: Using API Version  1
	I0229 02:18:19.049106  360217 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:18:19.049417  360217 main.go:141] libmachine: Using API Version  1
	I0229 02:18:19.049443  360217 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:18:19.049468  360217 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:18:19.049605  360217 main.go:141] libmachine: Using API Version  1
	I0229 02:18:19.049623  360217 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:18:19.049632  360217 main.go:141] libmachine: (default-k8s-diff-port-254367) Calling .GetState
	I0229 02:18:19.049830  360217 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:18:19.049990  360217 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:18:19.050491  360217 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0229 02:18:19.050525  360217 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:18:19.050742  360217 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0229 02:18:19.050780  360217 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:18:19.052986  360217 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38555
	I0229 02:18:19.056042  360217 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-254367"
	W0229 02:18:19.056065  360217 addons.go:243] addon default-storageclass should already be in state true
	I0229 02:18:19.056101  360217 host.go:66] Checking if "default-k8s-diff-port-254367" exists ...
	I0229 02:18:19.056338  360217 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:18:19.056649  360217 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0229 02:18:19.056674  360217 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:18:19.057319  360217 main.go:141] libmachine: Using API Version  1
	I0229 02:18:19.057403  360217 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:18:19.058140  360217 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:18:19.059410  360217 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0229 02:18:19.059437  360217 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:18:19.069542  360217 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35931
	I0229 02:18:19.069932  360217 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:18:19.070411  360217 main.go:141] libmachine: Using API Version  1
	I0229 02:18:19.070438  360217 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:18:19.070747  360217 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:18:19.070987  360217 main.go:141] libmachine: (default-k8s-diff-port-254367) Calling .GetState
	I0229 02:18:19.072429  360217 main.go:141] libmachine: (default-k8s-diff-port-254367) Calling .DriverName
	I0229 02:18:19.074634  360217 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0229 02:18:19.076733  360217 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0229 02:18:19.078676  360217 addons.go:426] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0229 02:18:19.078702  360217 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0229 02:18:19.078723  360217 main.go:141] libmachine: (default-k8s-diff-port-254367) Calling .GetSSHHostname
	I0229 02:18:19.078731  360217 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40495
	I0229 02:18:19.078949  360217 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42347
	I0229 02:18:19.079355  360217 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:18:19.079753  360217 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:18:19.080120  360217 main.go:141] libmachine: Using API Version  1
	I0229 02:18:19.080143  360217 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:18:19.080374  360217 main.go:141] libmachine: Using API Version  1
	I0229 02:18:19.080389  360217 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:18:19.080491  360217 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:18:19.080718  360217 main.go:141] libmachine: (default-k8s-diff-port-254367) Calling .GetState
	I0229 02:18:19.080832  360217 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:18:19.081012  360217 main.go:141] libmachine: (default-k8s-diff-port-254367) Calling .GetState
	I0229 02:18:19.082727  360217 main.go:141] libmachine: (default-k8s-diff-port-254367) Calling .DriverName
	I0229 02:18:19.083018  360217 main.go:141] libmachine: (default-k8s-diff-port-254367) Calling .DriverName
	I0229 02:18:19.084629  360217 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0229 02:18:19.083192  360217 main.go:141] libmachine: (default-k8s-diff-port-254367) DBG | domain default-k8s-diff-port-254367 has defined MAC address 52:54:00:52:a9:36 in network mk-default-k8s-diff-port-254367
	I0229 02:18:19.083785  360217 main.go:141] libmachine: (default-k8s-diff-port-254367) Calling .GetSSHPort
	I0229 02:18:19.086324  360217 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0229 02:18:19.086355  360217 main.go:141] libmachine: (default-k8s-diff-port-254367) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:a9:36", ip: ""} in network mk-default-k8s-diff-port-254367: {Iface:virbr3 ExpiryTime:2024-02-29 03:09:29 +0000 UTC Type:0 Mac:52:54:00:52:a9:36 Iaid: IPaddr:192.168.72.88 Prefix:24 Hostname:default-k8s-diff-port-254367 Clientid:01:52:54:00:52:a9:36}
	I0229 02:18:19.087244  360217 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34581
	I0229 02:18:19.087643  360217 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 02:18:19.088961  360217 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0229 02:18:19.088981  360217 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0229 02:18:19.089000  360217 main.go:141] libmachine: (default-k8s-diff-port-254367) Calling .GetSSHHostname
	I0229 02:18:19.087691  360217 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0229 02:18:19.089061  360217 main.go:141] libmachine: (default-k8s-diff-port-254367) Calling .GetSSHHostname
	I0229 02:18:19.087724  360217 main.go:141] libmachine: (default-k8s-diff-port-254367) DBG | domain default-k8s-diff-port-254367 has defined IP address 192.168.72.88 and MAC address 52:54:00:52:a9:36 in network mk-default-k8s-diff-port-254367
	I0229 02:18:19.087806  360217 main.go:141] libmachine: (default-k8s-diff-port-254367) Calling .GetSSHKeyPath
	I0229 02:18:19.087943  360217 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:18:19.089282  360217 main.go:141] libmachine: (default-k8s-diff-port-254367) Calling .GetSSHUsername
	I0229 02:18:19.089425  360217 sshutil.go:53] new ssh client: &{IP:192.168.72.88 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-309085/.minikube/machines/default-k8s-diff-port-254367/id_rsa Username:docker}
	I0229 02:18:19.090396  360217 main.go:141] libmachine: Using API Version  1
	I0229 02:18:19.090419  360217 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:18:19.090890  360217 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:18:19.091717  360217 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0229 02:18:19.091743  360217 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:18:19.092187  360217 main.go:141] libmachine: (default-k8s-diff-port-254367) DBG | domain default-k8s-diff-port-254367 has defined MAC address 52:54:00:52:a9:36 in network mk-default-k8s-diff-port-254367
	I0229 02:18:19.092654  360217 main.go:141] libmachine: (default-k8s-diff-port-254367) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:a9:36", ip: ""} in network mk-default-k8s-diff-port-254367: {Iface:virbr3 ExpiryTime:2024-02-29 03:09:29 +0000 UTC Type:0 Mac:52:54:00:52:a9:36 Iaid: IPaddr:192.168.72.88 Prefix:24 Hostname:default-k8s-diff-port-254367 Clientid:01:52:54:00:52:a9:36}
	I0229 02:18:19.092677  360217 main.go:141] libmachine: (default-k8s-diff-port-254367) DBG | domain default-k8s-diff-port-254367 has defined IP address 192.168.72.88 and MAC address 52:54:00:52:a9:36 in network mk-default-k8s-diff-port-254367
	I0229 02:18:19.092801  360217 main.go:141] libmachine: (default-k8s-diff-port-254367) Calling .GetSSHPort
	I0229 02:18:19.093024  360217 main.go:141] libmachine: (default-k8s-diff-port-254367) Calling .GetSSHKeyPath
	I0229 02:18:19.093212  360217 main.go:141] libmachine: (default-k8s-diff-port-254367) DBG | domain default-k8s-diff-port-254367 has defined MAC address 52:54:00:52:a9:36 in network mk-default-k8s-diff-port-254367
	I0229 02:18:19.093402  360217 main.go:141] libmachine: (default-k8s-diff-port-254367) Calling .GetSSHUsername
	I0229 02:18:19.093539  360217 sshutil.go:53] new ssh client: &{IP:192.168.72.88 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-309085/.minikube/machines/default-k8s-diff-port-254367/id_rsa Username:docker}
	I0229 02:18:19.093806  360217 main.go:141] libmachine: (default-k8s-diff-port-254367) Calling .GetSSHPort
	I0229 02:18:19.093828  360217 main.go:141] libmachine: (default-k8s-diff-port-254367) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:a9:36", ip: ""} in network mk-default-k8s-diff-port-254367: {Iface:virbr3 ExpiryTime:2024-02-29 03:09:29 +0000 UTC Type:0 Mac:52:54:00:52:a9:36 Iaid: IPaddr:192.168.72.88 Prefix:24 Hostname:default-k8s-diff-port-254367 Clientid:01:52:54:00:52:a9:36}
	I0229 02:18:19.093851  360217 main.go:141] libmachine: (default-k8s-diff-port-254367) DBG | domain default-k8s-diff-port-254367 has defined IP address 192.168.72.88 and MAC address 52:54:00:52:a9:36 in network mk-default-k8s-diff-port-254367
	I0229 02:18:19.093940  360217 main.go:141] libmachine: (default-k8s-diff-port-254367) Calling .GetSSHKeyPath
	I0229 02:18:19.094226  360217 main.go:141] libmachine: (default-k8s-diff-port-254367) Calling .GetSSHUsername
	I0229 02:18:19.094421  360217 sshutil.go:53] new ssh client: &{IP:192.168.72.88 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-309085/.minikube/machines/default-k8s-diff-port-254367/id_rsa Username:docker}
	W0229 02:18:19.100332  360217 kapi.go:245] failed rescaling "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-254367" context to 1 replicas: non-retryable failure while rescaling coredns deployment: Operation cannot be fulfilled on deployments.apps "coredns": the object has been modified; please apply your changes to the latest version and try again
	E0229 02:18:19.100363  360217 start.go:219] Unable to scale down deployment "coredns" in namespace "kube-system" to 1 replica: non-retryable failure while rescaling coredns deployment: Operation cannot be fulfilled on deployments.apps "coredns": the object has been modified; please apply your changes to the latest version and try again
	I0229 02:18:19.100388  360217 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.72.88 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0229 02:18:19.101941  360217 out.go:177] * Verifying Kubernetes components...
	I0229 02:18:19.103689  360217 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
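[Annotation] "Verifying Kubernetes components" starts with the probe on the line above: `systemctl is-active --quiet` exits zero only for an active unit, so it doubles as a cheap kubelet liveness check. A local sketch of the same probe (minikube runs it over SSH with sudo):

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Exit status 0 if and only if the kubelet unit is active.
	if err := exec.Command("systemctl", "is-active", "--quiet",
		"kubelet").Run(); err != nil {
		fmt.Fprintln(os.Stderr, "kubelet is not active:", err)
		os.Exit(1)
	}
	fmt.Println("kubelet is active")
}
```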
	I0229 02:18:19.114276  360217 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44089
	I0229 02:18:19.114684  360217 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:18:19.115166  360217 main.go:141] libmachine: Using API Version  1
	I0229 02:18:19.115190  360217 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:18:19.115557  360217 main.go:141] libmachine: () Calling .GetMachineName
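[Annotation] The dense `Launching plugin server` / `Plugin server listening at address 127.0.0.1:<port>` / `() Calling .GetVersion` chatter throughout this block is libmachine's plugin model: each kvm2 driver handle forks a child process that serves the driver API over RPC on a random loopback port, and every `.GetState`, `.GetMachineName`, or `.GetSSHHostname` in the log is one synchronous round-trip to that child. A schematic net/rpc sketch of the pattern follows; the service and method shapes are illustrative, not libmachine's actual wire API.

```go
package main

import (
	"fmt"
	"net"
	"net/rpc"
)

// Driver stands in for a machine driver plugin; the real kvm2 driver
// exposes a different, much larger surface.
type Driver struct{}

func (d *Driver) GetState(profile string, state *string) error {
	*state = "Running" // a real driver would query libvirt here
	return nil
}

func main() {
	// Child side: bind a random loopback port and report it, as in
	// "Plugin server listening at address 127.0.0.1:<port>".
	srv := rpc.NewServer()
	if err := srv.Register(&Driver{}); err != nil {
		panic(err)
	}
	ln, err := net.Listen("tcp", "127.0.0.1:0")
	if err != nil {
		panic(err)
	}
	fmt.Println("Plugin server listening at address", ln.Addr())
	go srv.Accept(ln)

	// Parent side: each "(profile) Calling .GetState" in the log is a
	// synchronous call like this one.
	client, err := rpc.Dial("tcp", ln.Addr().String())
	if err != nil {
		panic(err)
	}
	var state string
	if err := client.Call("Driver.GetState",
		"default-k8s-diff-port-254367", &state); err != nil {
		panic(err)
	}
	fmt.Println("GetState:", state)
}
```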
	I0229 02:18:15.308627  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:18:17.807561  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:18:19.808357  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:18:18.299689  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:18:18.315449  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:18:18.315523  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:18:18.357310  360776 cri.go:89] found id: ""
	I0229 02:18:18.357347  360776 logs.go:276] 0 containers: []
	W0229 02:18:18.357360  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:18:18.357369  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:18:18.357427  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:18:18.410178  360776 cri.go:89] found id: ""
	I0229 02:18:18.410212  360776 logs.go:276] 0 containers: []
	W0229 02:18:18.410224  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:18:18.410232  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:18:18.410300  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:18:18.452273  360776 cri.go:89] found id: ""
	I0229 02:18:18.452303  360776 logs.go:276] 0 containers: []
	W0229 02:18:18.452315  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:18:18.452330  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:18:18.452398  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:18:18.493134  360776 cri.go:89] found id: ""
	I0229 02:18:18.493161  360776 logs.go:276] 0 containers: []
	W0229 02:18:18.493170  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:18:18.493176  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:18:18.493247  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:18:18.530812  360776 cri.go:89] found id: ""
	I0229 02:18:18.530843  360776 logs.go:276] 0 containers: []
	W0229 02:18:18.530855  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:18:18.530864  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:18:18.530931  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:18:18.572183  360776 cri.go:89] found id: ""
	I0229 02:18:18.572216  360776 logs.go:276] 0 containers: []
	W0229 02:18:18.572231  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:18:18.572240  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:18:18.572314  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:18:18.612117  360776 cri.go:89] found id: ""
	I0229 02:18:18.612148  360776 logs.go:276] 0 containers: []
	W0229 02:18:18.612160  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:18:18.612169  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:18:18.612230  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:18:18.653827  360776 cri.go:89] found id: ""
	I0229 02:18:18.653855  360776 logs.go:276] 0 containers: []
	W0229 02:18:18.653866  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:18:18.653879  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:18:18.653898  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:18:18.688058  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:18:18.688094  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:18:18.735458  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:18:18.735493  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:18:18.795735  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:18:18.795780  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:18:18.816207  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:18:18.816239  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:18:18.928414  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:18:21.429284  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:18:21.445010  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:18:21.445084  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:18:21.484084  360776 cri.go:89] found id: ""
	I0229 02:18:21.484128  360776 logs.go:276] 0 containers: []
	W0229 02:18:21.484141  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:18:21.484159  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:18:21.484223  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:18:21.536516  360776 cri.go:89] found id: ""
	I0229 02:18:21.536550  360776 logs.go:276] 0 containers: []
	W0229 02:18:21.536563  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:18:21.536571  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:18:21.536636  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:18:21.588732  360776 cri.go:89] found id: ""
	I0229 02:18:21.588761  360776 logs.go:276] 0 containers: []
	W0229 02:18:21.588773  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:18:21.588782  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:18:21.588843  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:18:21.644434  360776 cri.go:89] found id: ""
	I0229 02:18:21.644470  360776 logs.go:276] 0 containers: []
	W0229 02:18:21.644483  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:18:21.644491  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:18:21.644560  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:18:21.685496  360776 cri.go:89] found id: ""
	I0229 02:18:21.685528  360776 logs.go:276] 0 containers: []
	W0229 02:18:21.685540  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:18:21.685548  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:18:21.685615  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:18:21.741146  360776 cri.go:89] found id: ""
	I0229 02:18:21.741176  360776 logs.go:276] 0 containers: []
	W0229 02:18:21.741188  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:18:21.741196  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:18:21.741287  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:18:21.790924  360776 cri.go:89] found id: ""
	I0229 02:18:21.790953  360776 logs.go:276] 0 containers: []
	W0229 02:18:21.790964  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:18:21.790972  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:18:21.791040  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:18:21.843079  360776 cri.go:89] found id: ""
	I0229 02:18:21.843107  360776 logs.go:276] 0 containers: []
	W0229 02:18:21.843118  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:18:21.843131  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:18:21.843155  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:18:21.917006  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:18:21.917035  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:18:21.987268  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:18:21.987313  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:18:22.009660  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:18:22.009699  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:18:22.101976  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:18:22.102000  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:18:22.102017  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:18:19.115785  360217 main.go:141] libmachine: (default-k8s-diff-port-254367) Calling .GetState
	I0229 02:18:19.118586  360217 main.go:141] libmachine: (default-k8s-diff-port-254367) Calling .DriverName
	I0229 02:18:19.118869  360217 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0229 02:18:19.118886  360217 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0229 02:18:19.118905  360217 main.go:141] libmachine: (default-k8s-diff-port-254367) Calling .GetSSHHostname
	I0229 02:18:19.121918  360217 main.go:141] libmachine: (default-k8s-diff-port-254367) DBG | domain default-k8s-diff-port-254367 has defined MAC address 52:54:00:52:a9:36 in network mk-default-k8s-diff-port-254367
	I0229 02:18:19.122332  360217 main.go:141] libmachine: (default-k8s-diff-port-254367) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:a9:36", ip: ""} in network mk-default-k8s-diff-port-254367: {Iface:virbr3 ExpiryTime:2024-02-29 03:09:29 +0000 UTC Type:0 Mac:52:54:00:52:a9:36 Iaid: IPaddr:192.168.72.88 Prefix:24 Hostname:default-k8s-diff-port-254367 Clientid:01:52:54:00:52:a9:36}
	I0229 02:18:19.122364  360217 main.go:141] libmachine: (default-k8s-diff-port-254367) DBG | domain default-k8s-diff-port-254367 has defined IP address 192.168.72.88 and MAC address 52:54:00:52:a9:36 in network mk-default-k8s-diff-port-254367
	I0229 02:18:19.122552  360217 main.go:141] libmachine: (default-k8s-diff-port-254367) Calling .GetSSHPort
	I0229 02:18:19.122770  360217 main.go:141] libmachine: (default-k8s-diff-port-254367) Calling .GetSSHKeyPath
	I0229 02:18:19.122996  360217 main.go:141] libmachine: (default-k8s-diff-port-254367) Calling .GetSSHUsername
	I0229 02:18:19.123154  360217 sshutil.go:53] new ssh client: &{IP:192.168.72.88 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-309085/.minikube/machines/default-k8s-diff-port-254367/id_rsa Username:docker}
	I0229 02:18:19.269274  360217 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-254367" to be "Ready" ...
	I0229 02:18:19.269550  360217 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0229 02:18:19.282334  360217 node_ready.go:49] node "default-k8s-diff-port-254367" has status "Ready":"True"
	I0229 02:18:19.282362  360217 node_ready.go:38] duration metric: took 13.046941ms waiting for node "default-k8s-diff-port-254367" to be "Ready" ...
	I0229 02:18:19.282377  360217 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 02:18:19.298326  360217 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-254367" in "kube-system" namespace to be "Ready" ...
	I0229 02:18:19.311217  360217 pod_ready.go:92] pod "etcd-default-k8s-diff-port-254367" in "kube-system" namespace has status "Ready":"True"
	I0229 02:18:19.311243  360217 pod_ready.go:81] duration metric: took 12.887306ms waiting for pod "etcd-default-k8s-diff-port-254367" in "kube-system" namespace to be "Ready" ...
	I0229 02:18:19.311252  360217 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-254367" in "kube-system" namespace to be "Ready" ...
	I0229 02:18:19.317185  360217 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-254367" in "kube-system" namespace has status "Ready":"True"
	I0229 02:18:19.317210  360217 pod_ready.go:81] duration metric: took 5.951807ms waiting for pod "kube-apiserver-default-k8s-diff-port-254367" in "kube-system" namespace to be "Ready" ...
	I0229 02:18:19.317219  360217 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-254367" in "kube-system" namespace to be "Ready" ...
	I0229 02:18:19.330495  360217 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0229 02:18:19.330519  360217 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0229 02:18:19.331739  360217 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-254367" in "kube-system" namespace has status "Ready":"True"
	I0229 02:18:19.331775  360217 pod_ready.go:81] duration metric: took 14.548327ms waiting for pod "kube-controller-manager-default-k8s-diff-port-254367" in "kube-system" namespace to be "Ready" ...
	I0229 02:18:19.331791  360217 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-dlgmz" in "kube-system" namespace to be "Ready" ...
	I0229 02:18:19.363610  360217 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0229 02:18:19.461745  360217 addons.go:426] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0229 02:18:19.461779  360217 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0229 02:18:19.467030  360217 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0229 02:18:19.467234  360217 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0229 02:18:19.467253  360217 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0229 02:18:19.568507  360217 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0229 02:18:19.568540  360217 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0229 02:18:19.641306  360217 addons.go:426] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0229 02:18:19.641346  360217 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0229 02:18:19.750251  360217 addons.go:426] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0229 02:18:19.750282  360217 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0229 02:18:19.807358  360217 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0229 02:18:19.886145  360217 addons.go:426] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0229 02:18:19.886169  360217 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0229 02:18:20.066662  360217 addons.go:426] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0229 02:18:20.066699  360217 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0229 02:18:20.097965  360217 addons.go:426] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0229 02:18:20.097990  360217 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0229 02:18:20.136049  360217 addons.go:426] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0229 02:18:20.136075  360217 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0229 02:18:20.232757  360217 addons.go:426] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0229 02:18:20.232780  360217 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0229 02:18:20.290653  360217 addons.go:426] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0229 02:18:20.290679  360217 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0229 02:18:20.359549  360217 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0229 02:18:21.354053  360217 pod_ready.go:102] pod "kube-proxy-dlgmz" in "kube-system" namespace has status "Ready":"False"
	I0229 02:18:21.788753  360217 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.519159841s)
	I0229 02:18:21.788798  360217 start.go:929] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
	I0229 02:18:22.362286  360217 pod_ready.go:92] pod "kube-proxy-dlgmz" in "kube-system" namespace has status "Ready":"True"
	I0229 02:18:22.362318  360217 pod_ready.go:81] duration metric: took 3.030515197s waiting for pod "kube-proxy-dlgmz" in "kube-system" namespace to be "Ready" ...
	I0229 02:18:22.362331  360217 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-254367" in "kube-system" namespace to be "Ready" ...
	I0229 02:18:22.392397  360217 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-254367" in "kube-system" namespace has status "Ready":"True"
	I0229 02:18:22.392428  360217 pod_ready.go:81] duration metric: took 30.087397ms waiting for pod "kube-scheduler-default-k8s-diff-port-254367" in "kube-system" namespace to be "Ready" ...
	I0229 02:18:22.392441  360217 pod_ready.go:38] duration metric: took 3.110051734s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 02:18:22.392462  360217 api_server.go:52] waiting for apiserver process to appear ...
	I0229 02:18:22.392516  360217 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:18:22.755340  360217 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.288276833s)
	I0229 02:18:22.755387  360217 main.go:141] libmachine: Making call to close driver server
	I0229 02:18:22.755402  360217 main.go:141] libmachine: (default-k8s-diff-port-254367) Calling .Close
	I0229 02:18:22.755534  360217 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.948137303s)
	I0229 02:18:22.755568  360217 main.go:141] libmachine: Making call to close driver server
	I0229 02:18:22.755581  360217 main.go:141] libmachine: (default-k8s-diff-port-254367) Calling .Close
	I0229 02:18:22.755693  360217 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.392056284s)
	I0229 02:18:22.755714  360217 main.go:141] libmachine: Making call to close driver server
	I0229 02:18:22.755723  360217 main.go:141] libmachine: (default-k8s-diff-port-254367) Calling .Close
	I0229 02:18:22.755982  360217 main.go:141] libmachine: (default-k8s-diff-port-254367) DBG | Closing plugin on server side
	I0229 02:18:22.756023  360217 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:18:22.756037  360217 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:18:22.756047  360217 main.go:141] libmachine: Making call to close driver server
	I0229 02:18:22.756052  360217 main.go:141] libmachine: (default-k8s-diff-port-254367) Calling .Close
	I0229 02:18:22.756327  360217 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:18:22.756341  360217 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:18:22.756357  360217 main.go:141] libmachine: Making call to close driver server
	I0229 02:18:22.756366  360217 main.go:141] libmachine: (default-k8s-diff-port-254367) Calling .Close
	I0229 02:18:22.760172  360217 main.go:141] libmachine: (default-k8s-diff-port-254367) DBG | Closing plugin on server side
	I0229 02:18:22.760183  360217 main.go:141] libmachine: (default-k8s-diff-port-254367) DBG | Closing plugin on server side
	I0229 02:18:22.760221  360217 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:18:22.760234  360217 main.go:141] libmachine: (default-k8s-diff-port-254367) DBG | Closing plugin on server side
	I0229 02:18:22.760250  360217 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:18:22.760268  360217 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:18:22.760258  360217 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:18:22.760298  360217 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:18:22.760278  360217 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:18:22.760380  360217 main.go:141] libmachine: Making call to close driver server
	I0229 02:18:22.760390  360217 main.go:141] libmachine: (default-k8s-diff-port-254367) Calling .Close
	I0229 02:18:22.760627  360217 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:18:22.760646  360217 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:18:22.760659  360217 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-254367"
	I0229 02:18:22.788927  360217 main.go:141] libmachine: Making call to close driver server
	I0229 02:18:22.788955  360217 main.go:141] libmachine: (default-k8s-diff-port-254367) Calling .Close
	I0229 02:18:22.789219  360217 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:18:22.789242  360217 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:18:23.407247  360217 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (3.047637799s)
	I0229 02:18:23.407257  360217 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.014711886s)
	I0229 02:18:23.407374  360217 api_server.go:72] duration metric: took 4.306954781s to wait for apiserver process to appear ...
	I0229 02:18:23.407399  360217 api_server.go:88] waiting for apiserver healthz status ...
	I0229 02:18:23.407433  360217 api_server.go:253] Checking apiserver healthz at https://192.168.72.88:8444/healthz ...
	I0229 02:18:23.407314  360217 main.go:141] libmachine: Making call to close driver server
	I0229 02:18:23.407545  360217 main.go:141] libmachine: (default-k8s-diff-port-254367) Calling .Close
	I0229 02:18:23.407931  360217 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:18:23.407948  360217 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:18:23.407959  360217 main.go:141] libmachine: Making call to close driver server
	I0229 02:18:23.407967  360217 main.go:141] libmachine: (default-k8s-diff-port-254367) Calling .Close
	I0229 02:18:23.408309  360217 main.go:141] libmachine: (default-k8s-diff-port-254367) DBG | Closing plugin on server side
	I0229 02:18:23.408318  360217 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:18:23.408331  360217 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:18:23.411220  360217 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-254367 addons enable metrics-server
	
	I0229 02:18:23.412663  360217 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass, dashboard
	I0229 02:18:23.414033  360217 addons.go:505] enable addons completed in 4.386557527s: enabled=[storage-provisioner metrics-server default-storageclass dashboard]
	I0229 02:18:23.439279  360217 api_server.go:279] https://192.168.72.88:8444/healthz returned 200:
	ok
	I0229 02:18:23.443380  360217 api_server.go:141] control plane version: v1.28.4
	I0229 02:18:23.443419  360217 api_server.go:131] duration metric: took 36.010336ms to wait for apiserver health ...
	I0229 02:18:23.443434  360217 system_pods.go:43] waiting for kube-system pods to appear ...
	I0229 02:18:23.459207  360217 system_pods.go:59] 9 kube-system pods found
	I0229 02:18:23.459239  360217 system_pods.go:61] "coredns-5dd5756b68-vsxcv" [f2cabd39-df55-4e81-85d3-a745eb5533c6] Running
	I0229 02:18:23.459246  360217 system_pods.go:61] "coredns-5dd5756b68-x6qjk" [3a4370e5-86c3-4c8b-b275-70e55da74256] Running
	I0229 02:18:23.459253  360217 system_pods.go:61] "etcd-default-k8s-diff-port-254367" [5f2c758b-5068-4138-b2c1-b4161802f59f] Running
	I0229 02:18:23.459259  360217 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-254367" [bfd63194-f697-48ec-a594-9fb43acd5c1c] Running
	I0229 02:18:23.459265  360217 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-254367" [817f802d-a424-425d-89ae-8cab6c34c18d] Running
	I0229 02:18:23.459271  360217 system_pods.go:61] "kube-proxy-dlgmz" [0d9e6b25-c506-43a6-b1d2-e3906fcf7b92] Running
	I0229 02:18:23.459277  360217 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-254367" [fd8b2ce6-a716-4aa4-b09d-c83b4c9c3b90] Running
	I0229 02:18:23.459288  360217 system_pods.go:61] "metrics-server-57f55c9bc5-2wc8d" [da2ffb04-58a1-476a-8ea2-5e8d33512c7d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0229 02:18:23.459296  360217 system_pods.go:61] "storage-provisioner" [0e031ad8-0a53-4aa3-9a00-e03078b0db2f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0229 02:18:23.459314  360217 system_pods.go:74] duration metric: took 15.86958ms to wait for pod list to return data ...
	I0229 02:18:23.459329  360217 default_sa.go:34] waiting for default service account to be created ...
	I0229 02:18:23.464125  360217 default_sa.go:45] found service account: "default"
	I0229 02:18:23.464196  360217 default_sa.go:55] duration metric: took 4.855817ms for default service account to be created ...
	I0229 02:18:23.464222  360217 system_pods.go:116] waiting for k8s-apps to be running ...
	I0229 02:18:23.471833  360217 system_pods.go:86] 9 kube-system pods found
	I0229 02:18:23.471861  360217 system_pods.go:89] "coredns-5dd5756b68-vsxcv" [f2cabd39-df55-4e81-85d3-a745eb5533c6] Running
	I0229 02:18:23.471869  360217 system_pods.go:89] "coredns-5dd5756b68-x6qjk" [3a4370e5-86c3-4c8b-b275-70e55da74256] Running
	I0229 02:18:23.471876  360217 system_pods.go:89] "etcd-default-k8s-diff-port-254367" [5f2c758b-5068-4138-b2c1-b4161802f59f] Running
	I0229 02:18:23.471883  360217 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-254367" [bfd63194-f697-48ec-a594-9fb43acd5c1c] Running
	I0229 02:18:23.471889  360217 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-254367" [817f802d-a424-425d-89ae-8cab6c34c18d] Running
	I0229 02:18:23.471896  360217 system_pods.go:89] "kube-proxy-dlgmz" [0d9e6b25-c506-43a6-b1d2-e3906fcf7b92] Running
	I0229 02:18:23.471908  360217 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-254367" [fd8b2ce6-a716-4aa4-b09d-c83b4c9c3b90] Running
	I0229 02:18:23.471917  360217 system_pods.go:89] "metrics-server-57f55c9bc5-2wc8d" [da2ffb04-58a1-476a-8ea2-5e8d33512c7d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0229 02:18:23.471927  360217 system_pods.go:89] "storage-provisioner" [0e031ad8-0a53-4aa3-9a00-e03078b0db2f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0229 02:18:23.471943  360217 system_pods.go:126] duration metric: took 7.704603ms to wait for k8s-apps to be running ...
	I0229 02:18:23.471955  360217 system_svc.go:44] waiting for kubelet service to be running ....
	I0229 02:18:23.472051  360217 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 02:18:23.495777  360217 system_svc.go:56] duration metric: took 23.811126ms WaitForService to wait for kubelet.
	I0229 02:18:23.495810  360217 kubeadm.go:581] duration metric: took 4.395396941s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0229 02:18:23.495838  360217 node_conditions.go:102] verifying NodePressure condition ...
	I0229 02:18:23.502935  360217 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0229 02:18:23.502962  360217 node_conditions.go:123] node cpu capacity is 2
	I0229 02:18:23.502975  360217 node_conditions.go:105] duration metric: took 7.130297ms to run NodePressure ...
	I0229 02:18:23.502991  360217 start.go:228] waiting for startup goroutines ...
	I0229 02:18:23.503004  360217 start.go:233] waiting for cluster config update ...
	I0229 02:18:23.503019  360217 start.go:242] writing updated cluster config ...
	I0229 02:18:23.503329  360217 ssh_runner.go:195] Run: rm -f paused
	I0229 02:18:23.565856  360217 start.go:601] kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)
	I0229 02:18:23.567626  360217 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-254367" cluster and "default" namespace by default
	I0229 02:18:21.812768  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:18:24.310049  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:18:24.648787  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:18:24.663511  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:18:24.663574  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:18:24.702299  360776 cri.go:89] found id: ""
	I0229 02:18:24.702329  360776 logs.go:276] 0 containers: []
	W0229 02:18:24.702342  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:18:24.702349  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:18:24.702414  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:18:24.741664  360776 cri.go:89] found id: ""
	I0229 02:18:24.741696  360776 logs.go:276] 0 containers: []
	W0229 02:18:24.741708  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:18:24.741720  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:18:24.741782  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:18:24.809755  360776 cri.go:89] found id: ""
	I0229 02:18:24.809788  360776 logs.go:276] 0 containers: []
	W0229 02:18:24.809799  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:18:24.809807  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:18:24.809867  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:18:24.850308  360776 cri.go:89] found id: ""
	I0229 02:18:24.850335  360776 logs.go:276] 0 containers: []
	W0229 02:18:24.850344  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:18:24.850351  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:18:24.850408  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:18:24.903507  360776 cri.go:89] found id: ""
	I0229 02:18:24.903539  360776 logs.go:276] 0 containers: []
	W0229 02:18:24.903551  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:18:24.903559  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:18:24.903624  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:18:24.952996  360776 cri.go:89] found id: ""
	I0229 02:18:24.953026  360776 logs.go:276] 0 containers: []
	W0229 02:18:24.953039  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:18:24.953048  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:18:24.953119  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:18:24.999301  360776 cri.go:89] found id: ""
	I0229 02:18:24.999334  360776 logs.go:276] 0 containers: []
	W0229 02:18:24.999347  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:18:24.999355  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:18:24.999418  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:18:25.044310  360776 cri.go:89] found id: ""
	I0229 02:18:25.044350  360776 logs.go:276] 0 containers: []
	W0229 02:18:25.044362  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:18:25.044375  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:18:25.044391  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:18:25.091374  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:18:25.091407  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:18:25.109080  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:18:25.109118  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:18:25.186611  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:18:25.186639  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:18:25.186663  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:18:25.226779  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:18:25.226825  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:18:26.320759  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:18:28.807091  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:18:27.775896  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:18:27.789596  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:18:27.789662  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:18:27.834159  360776 cri.go:89] found id: ""
	I0229 02:18:27.834186  360776 logs.go:276] 0 containers: []
	W0229 02:18:27.834198  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:18:27.834207  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:18:27.834278  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:18:27.887355  360776 cri.go:89] found id: ""
	I0229 02:18:27.887386  360776 logs.go:276] 0 containers: []
	W0229 02:18:27.887398  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:18:27.887407  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:18:27.887481  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:18:27.927671  360776 cri.go:89] found id: ""
	I0229 02:18:27.927710  360776 logs.go:276] 0 containers: []
	W0229 02:18:27.927724  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:18:27.927740  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:18:27.927819  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:18:27.983438  360776 cri.go:89] found id: ""
	I0229 02:18:27.983471  360776 logs.go:276] 0 containers: []
	W0229 02:18:27.983484  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:18:27.983493  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:18:27.983562  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:18:28.026112  360776 cri.go:89] found id: ""
	I0229 02:18:28.026143  360776 logs.go:276] 0 containers: []
	W0229 02:18:28.026156  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:18:28.026238  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:18:28.026310  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:18:28.069085  360776 cri.go:89] found id: ""
	I0229 02:18:28.069118  360776 logs.go:276] 0 containers: []
	W0229 02:18:28.069130  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:18:28.069138  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:18:28.069285  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:18:28.115010  360776 cri.go:89] found id: ""
	I0229 02:18:28.115037  360776 logs.go:276] 0 containers: []
	W0229 02:18:28.115046  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:18:28.115051  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:18:28.115113  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:18:28.157726  360776 cri.go:89] found id: ""
	I0229 02:18:28.157756  360776 logs.go:276] 0 containers: []
	W0229 02:18:28.157769  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:18:28.157783  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:18:28.157800  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:18:28.218148  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:18:28.218196  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:18:28.238106  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:18:28.238142  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:18:28.328947  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:18:28.328971  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:18:28.328988  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:18:28.364795  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:18:28.364831  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:18:30.914422  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:18:30.929248  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:18:30.929334  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:18:30.983535  360776 cri.go:89] found id: ""
	I0229 02:18:30.983566  360776 logs.go:276] 0 containers: []
	W0229 02:18:30.983577  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:18:30.983585  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:18:30.983644  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:18:31.037809  360776 cri.go:89] found id: ""
	I0229 02:18:31.037842  360776 logs.go:276] 0 containers: []
	W0229 02:18:31.037853  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:18:31.037862  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:18:31.037933  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:18:31.089101  360776 cri.go:89] found id: ""
	I0229 02:18:31.089134  360776 logs.go:276] 0 containers: []
	W0229 02:18:31.089146  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:18:31.089154  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:18:31.089219  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:18:31.139413  360776 cri.go:89] found id: ""
	I0229 02:18:31.139444  360776 logs.go:276] 0 containers: []
	W0229 02:18:31.139456  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:18:31.139463  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:18:31.139542  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:18:31.177185  360776 cri.go:89] found id: ""
	I0229 02:18:31.177214  360776 logs.go:276] 0 containers: []
	W0229 02:18:31.177223  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:18:31.177229  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:18:31.177295  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:18:31.221339  360776 cri.go:89] found id: ""
	I0229 02:18:31.221374  360776 logs.go:276] 0 containers: []
	W0229 02:18:31.221387  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:18:31.221395  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:18:31.221461  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:18:31.261770  360776 cri.go:89] found id: ""
	I0229 02:18:31.261803  360776 logs.go:276] 0 containers: []
	W0229 02:18:31.261815  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:18:31.261824  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:18:31.261895  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:18:31.309126  360776 cri.go:89] found id: ""
	I0229 02:18:31.309157  360776 logs.go:276] 0 containers: []
	W0229 02:18:31.309168  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:18:31.309179  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:18:31.309193  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:18:31.362509  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:18:31.362552  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:18:31.379334  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:18:31.379383  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:18:31.471339  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:18:31.471359  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:18:31.471372  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:18:31.511126  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:18:31.511172  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:18:30.808454  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:18:33.308106  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:18:34.063372  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:18:34.077222  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:18:34.077297  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:18:34.116752  360776 cri.go:89] found id: ""
	I0229 02:18:34.116793  360776 logs.go:276] 0 containers: []
	W0229 02:18:34.116806  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:18:34.116815  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:18:34.116880  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:18:34.157658  360776 cri.go:89] found id: ""
	I0229 02:18:34.157689  360776 logs.go:276] 0 containers: []
	W0229 02:18:34.157700  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:18:34.157708  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:18:34.157779  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:18:34.199922  360776 cri.go:89] found id: ""
	I0229 02:18:34.199957  360776 logs.go:276] 0 containers: []
	W0229 02:18:34.199969  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:18:34.199977  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:18:34.200044  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:18:34.242474  360776 cri.go:89] found id: ""
	I0229 02:18:34.242505  360776 logs.go:276] 0 containers: []
	W0229 02:18:34.242517  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:18:34.242526  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:18:34.242585  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:18:34.289308  360776 cri.go:89] found id: ""
	I0229 02:18:34.289338  360776 logs.go:276] 0 containers: []
	W0229 02:18:34.289360  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:18:34.289367  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:18:34.289431  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:18:34.335947  360776 cri.go:89] found id: ""
	I0229 02:18:34.335985  360776 logs.go:276] 0 containers: []
	W0229 02:18:34.335997  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:18:34.336005  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:18:34.336073  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:18:34.377048  360776 cri.go:89] found id: ""
	I0229 02:18:34.377085  360776 logs.go:276] 0 containers: []
	W0229 02:18:34.377097  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:18:34.377107  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:18:34.377181  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:18:34.424208  360776 cri.go:89] found id: ""
	I0229 02:18:34.424238  360776 logs.go:276] 0 containers: []
	W0229 02:18:34.424250  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:18:34.424270  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:18:34.424288  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:18:34.500223  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:18:34.500245  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:18:34.500263  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:18:34.534652  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:18:34.534688  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:18:34.593369  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:18:34.593405  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:18:34.646940  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:18:34.646982  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:18:37.169523  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:18:37.184168  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:18:37.184245  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:18:37.232979  360776 cri.go:89] found id: ""
	I0229 02:18:37.233015  360776 logs.go:276] 0 containers: []
	W0229 02:18:37.233026  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:18:37.233037  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:18:37.233110  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:18:37.275771  360776 cri.go:89] found id: ""
	I0229 02:18:37.275796  360776 logs.go:276] 0 containers: []
	W0229 02:18:37.275805  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:18:37.275811  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:18:37.275877  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:18:37.322421  360776 cri.go:89] found id: ""
	I0229 02:18:37.322451  360776 logs.go:276] 0 containers: []
	W0229 02:18:37.322460  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:18:37.322466  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:18:37.322525  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:18:35.807858  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:18:38.307264  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:18:37.366974  360776 cri.go:89] found id: ""
	I0229 02:18:37.367001  360776 logs.go:276] 0 containers: []
	W0229 02:18:37.367011  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:18:37.367020  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:18:37.367080  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:18:37.408780  360776 cri.go:89] found id: ""
	I0229 02:18:37.408811  360776 logs.go:276] 0 containers: []
	W0229 02:18:37.408822  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:18:37.408828  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:18:37.408880  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:18:37.447402  360776 cri.go:89] found id: ""
	I0229 02:18:37.447429  360776 logs.go:276] 0 containers: []
	W0229 02:18:37.447441  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:18:37.447449  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:18:37.447511  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:18:37.486454  360776 cri.go:89] found id: ""
	I0229 02:18:37.486491  360776 logs.go:276] 0 containers: []
	W0229 02:18:37.486502  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:18:37.486510  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:18:37.486579  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:18:37.531484  360776 cri.go:89] found id: ""
	I0229 02:18:37.531517  360776 logs.go:276] 0 containers: []
	W0229 02:18:37.531533  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:18:37.531545  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:18:37.531562  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:18:37.581274  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:18:37.581312  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:18:37.601745  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:18:37.601777  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:18:37.707773  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:18:37.707801  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:18:37.707818  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:18:37.740658  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:18:37.740698  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
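[editor note] The repeated blocks above are minikube's diagnostic loop after the apiserver failed to appear: for each expected control-plane component it lists containers in any state via crictl, then gathers kubelet, dmesg, containerd, and container-status logs. A minimal sketch reproducing the same checks by hand over SSH to the node (every command below appears verbatim in the log; only the loop is added for brevity):

    # list all containers (any state) for each component minikube queries
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet kubernetes-dashboard; do
      echo "== $name =="
      sudo crictl ps -a --quiet --name="$name"
    done
    # then pull the same logs minikube gathers
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u containerd -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400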
	I0229 02:18:40.296427  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:18:40.311365  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:18:40.311439  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:18:40.354647  360776 cri.go:89] found id: ""
	I0229 02:18:40.354675  360776 logs.go:276] 0 containers: []
	W0229 02:18:40.354693  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:18:40.354701  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:18:40.354769  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:18:40.400490  360776 cri.go:89] found id: ""
	I0229 02:18:40.400520  360776 logs.go:276] 0 containers: []
	W0229 02:18:40.400529  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:18:40.400535  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:18:40.400602  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:18:40.442029  360776 cri.go:89] found id: ""
	I0229 02:18:40.442051  360776 logs.go:276] 0 containers: []
	W0229 02:18:40.442060  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:18:40.442065  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:18:40.442169  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:18:40.481183  360776 cri.go:89] found id: ""
	I0229 02:18:40.481216  360776 logs.go:276] 0 containers: []
	W0229 02:18:40.481228  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:18:40.481237  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:18:40.481316  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:18:40.523076  360776 cri.go:89] found id: ""
	I0229 02:18:40.523104  360776 logs.go:276] 0 containers: []
	W0229 02:18:40.523113  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:18:40.523118  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:18:40.523209  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:18:40.561787  360776 cri.go:89] found id: ""
	I0229 02:18:40.561817  360776 logs.go:276] 0 containers: []
	W0229 02:18:40.561826  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:18:40.561832  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:18:40.561908  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:18:40.598621  360776 cri.go:89] found id: ""
	I0229 02:18:40.598647  360776 logs.go:276] 0 containers: []
	W0229 02:18:40.598655  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:18:40.598662  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:18:40.598710  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:18:40.637701  360776 cri.go:89] found id: ""
	I0229 02:18:40.637734  360776 logs.go:276] 0 containers: []
	W0229 02:18:40.637745  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:18:40.637758  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:18:40.637775  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:18:40.685317  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:18:40.685351  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:18:40.735348  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:18:40.735386  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:18:40.751373  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:18:40.751434  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:18:40.822604  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:18:40.822624  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:18:40.822637  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:18:40.311266  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:18:42.806740  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:18:44.809136  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:18:43.357769  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:18:43.373119  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:18:43.373186  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:18:43.409160  360776 cri.go:89] found id: ""
	I0229 02:18:43.409181  360776 logs.go:276] 0 containers: []
	W0229 02:18:43.409189  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:18:43.409195  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:18:43.409238  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:18:43.447193  360776 cri.go:89] found id: ""
	I0229 02:18:43.447222  360776 logs.go:276] 0 containers: []
	W0229 02:18:43.447231  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:18:43.447237  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:18:43.447296  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:18:43.487906  360776 cri.go:89] found id: ""
	I0229 02:18:43.487934  360776 logs.go:276] 0 containers: []
	W0229 02:18:43.487942  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:18:43.487949  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:18:43.488008  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:18:43.527968  360776 cri.go:89] found id: ""
	I0229 02:18:43.528002  360776 logs.go:276] 0 containers: []
	W0229 02:18:43.528016  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:18:43.528024  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:18:43.528100  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:18:43.573298  360776 cri.go:89] found id: ""
	I0229 02:18:43.573333  360776 logs.go:276] 0 containers: []
	W0229 02:18:43.573344  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:18:43.573351  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:18:43.573461  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:18:43.630816  360776 cri.go:89] found id: ""
	I0229 02:18:43.630856  360776 logs.go:276] 0 containers: []
	W0229 02:18:43.630867  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:18:43.630881  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:18:43.630954  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:18:43.701516  360776 cri.go:89] found id: ""
	I0229 02:18:43.701547  360776 logs.go:276] 0 containers: []
	W0229 02:18:43.701559  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:18:43.701567  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:18:43.701636  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:18:43.747444  360776 cri.go:89] found id: ""
	I0229 02:18:43.747474  360776 logs.go:276] 0 containers: []
	W0229 02:18:43.747484  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:18:43.747494  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:18:43.747510  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:18:43.828216  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:18:43.828246  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:18:43.828270  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:18:43.874647  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:18:43.874684  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:18:43.937776  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:18:43.937808  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:18:43.989210  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:18:43.989250  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:18:46.506056  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:18:46.519717  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:18:46.519784  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:18:46.585095  360776 cri.go:89] found id: ""
	I0229 02:18:46.585128  360776 logs.go:276] 0 containers: []
	W0229 02:18:46.585141  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:18:46.585149  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:18:46.585212  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:18:46.638520  360776 cri.go:89] found id: ""
	I0229 02:18:46.638553  360776 logs.go:276] 0 containers: []
	W0229 02:18:46.638565  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:18:46.638572  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:18:46.638637  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:18:46.691413  360776 cri.go:89] found id: ""
	I0229 02:18:46.691446  360776 logs.go:276] 0 containers: []
	W0229 02:18:46.691458  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:18:46.691466  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:18:46.691532  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:18:46.735054  360776 cri.go:89] found id: ""
	I0229 02:18:46.735083  360776 logs.go:276] 0 containers: []
	W0229 02:18:46.735092  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:18:46.735098  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:18:46.735159  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:18:46.772486  360776 cri.go:89] found id: ""
	I0229 02:18:46.772531  360776 logs.go:276] 0 containers: []
	W0229 02:18:46.772543  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:18:46.772551  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:18:46.772610  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:18:46.815466  360776 cri.go:89] found id: ""
	I0229 02:18:46.815491  360776 logs.go:276] 0 containers: []
	W0229 02:18:46.815499  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:18:46.815505  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:18:46.815553  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:18:46.853168  360776 cri.go:89] found id: ""
	I0229 02:18:46.853199  360776 logs.go:276] 0 containers: []
	W0229 02:18:46.853212  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:18:46.853220  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:18:46.853299  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:18:46.894320  360776 cri.go:89] found id: ""
	I0229 02:18:46.894353  360776 logs.go:276] 0 containers: []
	W0229 02:18:46.894365  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:18:46.894378  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:18:46.894394  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:18:46.944593  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:18:46.944631  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:18:46.960405  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:18:46.960433  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:18:47.029929  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:18:47.029960  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:18:47.029977  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:18:47.065292  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:18:47.065327  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:18:47.308699  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:18:49.808633  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:18:49.620521  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:18:49.636247  360776 kubeadm.go:640] restartCluster took 4m12.880265518s
	W0229 02:18:49.636335  360776 out.go:239] ! Unable to restart cluster, will reset it: apiserver healthz: apiserver process never appeared
	I0229 02:18:49.636372  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0229 02:18:50.114412  360776 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 02:18:50.130257  360776 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0229 02:18:50.141556  360776 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 02:18:50.152882  360776 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
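[editor note] The `ls -la` probe above is minikube's stale-config check: exit status 2 means none of the kubeadm-generated kubeconfigs exist, so cleanup is skipped and `kubeadm init` runs fresh. A hedged sketch of the same check (paths copied from the log):

    # exit status 2 here means "no stale kubeconfigs to clean up"
    sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf \
                /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf \
      || echo "no stale kubeconfigs found; proceeding with kubeadm init"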
	I0229 02:18:50.152929  360776 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0229 02:18:50.213815  360776 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0229 02:18:50.213922  360776 kubeadm.go:322] [preflight] Running pre-flight checks
	I0229 02:18:50.341927  360776 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0229 02:18:50.342103  360776 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0229 02:18:50.342249  360776 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0229 02:18:50.577201  360776 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0229 02:18:50.578563  360776 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0229 02:18:50.587158  360776 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0229 02:18:50.712207  360776 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0229 02:18:50.714032  360776 out.go:204]   - Generating certificates and keys ...
	I0229 02:18:50.714149  360776 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0229 02:18:50.716103  360776 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0229 02:18:50.717503  360776 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0229 02:18:50.718203  360776 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0229 02:18:50.719194  360776 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0229 02:18:50.719913  360776 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0229 02:18:50.721364  360776 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0229 02:18:50.722412  360776 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0229 02:18:50.723087  360776 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0229 02:18:50.723663  360776 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0229 02:18:50.723813  360776 kubeadm.go:322] [certs] Using the existing "sa" key
	I0229 02:18:50.724029  360776 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0229 02:18:51.003432  360776 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0229 02:18:51.145978  360776 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0229 02:18:51.230808  360776 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0229 02:18:51.340889  360776 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0229 02:18:51.341726  360776 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0229 02:18:51.343443  360776 out.go:204]   - Booting up control plane ...
	I0229 02:18:51.343564  360776 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0229 02:18:51.347723  360776 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0229 02:18:51.348592  360776 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0229 02:18:51.349514  360776 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0229 02:18:51.352720  360776 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0229 02:18:52.307313  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:18:54.806310  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:18:56.806412  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:18:58.806973  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:19:01.306043  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:19:03.308131  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:19:05.308210  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:19:07.807594  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:19:09.812481  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:19:12.308103  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:19:14.310513  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:19:16.806841  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:19:18.807740  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:19:21.306666  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:19:23.307064  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:19:25.806451  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:19:27.806822  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:19:29.807253  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:19:31.352923  360776 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0229 02:19:31.353370  360776 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 02:19:31.353570  360776 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
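[editor note] kubeadm's [kubelet-check] probes the kubelet's local healthz endpoint, and the log shows that call being refused. To reproduce the probe by hand, the curl command is the one kubeadm quotes in the message itself; the systemctl/journalctl follow-ups are common next steps, not something this log runs:

    curl -sSL http://localhost:10248/healthz   # kubeadm's exact health probe
    systemctl status kubelet                   # is the unit running at all?
    journalctl -u kubelet -n 50 --no-pager     # recent kubelet errors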
	I0229 02:19:32.307377  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:19:34.309850  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:19:36.354842  360776 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 02:19:36.355179  360776 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 02:19:36.806074  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:19:38.807249  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:19:41.306690  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:19:43.308582  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:19:46.356431  360776 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 02:19:46.356735  360776 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 02:19:45.309102  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:19:47.808426  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:19:50.306270  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:19:52.307628  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:19:54.806254  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:19:56.800277  361093 pod_ready.go:81] duration metric: took 4m0.000614636s waiting for pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace to be "Ready" ...
	E0229 02:19:56.800308  361093 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace to be "Ready" (will not retry!)
	I0229 02:19:56.800332  361093 pod_ready.go:38] duration metric: took 4m14.556158159s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 02:19:56.800367  361093 kubeadm.go:640] restartCluster took 4m32.656788973s
	W0229 02:19:56.800444  361093 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
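[editor note] Process 361093 timed out after 4m0s waiting for metrics-server to become Ready and fell back to a full `kubeadm reset`. A rough kubectl equivalent of that readiness wait (a sketch only: minikube implements this in pod_ready.go, and the `k8s-app=metrics-server` label is the addon's usual label, assumed here rather than shown in the log):

    # wait up to 4 minutes for the pod to report Ready, then give up
    kubectl -n kube-system wait pod \
      -l k8s-app=metrics-server \
      --for=condition=Ready --timeout=4m \
      || echo "timed out; cluster will be reset"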
	I0229 02:19:56.800489  361093 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0229 02:20:01.980143  361093 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (5.179624969s)
	I0229 02:20:01.980234  361093 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 02:20:01.996633  361093 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0229 02:20:02.007422  361093 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 02:20:02.017783  361093 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 02:20:02.017835  361093 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0229 02:20:02.234279  361093 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0229 02:20:06.357825  360776 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 02:20:06.358110  360776 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 02:20:10.891699  361093 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0229 02:20:10.891827  361093 kubeadm.go:322] [preflight] Running pre-flight checks
	I0229 02:20:10.891929  361093 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0229 02:20:10.892046  361093 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0229 02:20:10.892166  361093 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0229 02:20:10.892275  361093 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0229 02:20:10.893594  361093 out.go:204]   - Generating certificates and keys ...
	I0229 02:20:10.893681  361093 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0229 02:20:10.893781  361093 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0229 02:20:10.893878  361093 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0229 02:20:10.893977  361093 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0229 02:20:10.894061  361093 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0229 02:20:10.894150  361093 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0229 02:20:10.894255  361093 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0229 02:20:10.894353  361093 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0229 02:20:10.894466  361093 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0229 02:20:10.894563  361093 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0229 02:20:10.894619  361093 kubeadm.go:322] [certs] Using the existing "sa" key
	I0229 02:20:10.894689  361093 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0229 02:20:10.894754  361093 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0229 02:20:10.894831  361093 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0229 02:20:10.894919  361093 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0229 02:20:10.895000  361093 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0229 02:20:10.895120  361093 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0229 02:20:10.895214  361093 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0229 02:20:10.897074  361093 out.go:204]   - Booting up control plane ...
	I0229 02:20:10.897177  361093 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0229 02:20:10.897301  361093 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0229 02:20:10.897401  361093 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0229 02:20:10.897546  361093 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0229 02:20:10.897655  361093 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0229 02:20:10.897730  361093 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0229 02:20:10.897955  361093 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0229 02:20:10.898072  361093 kubeadm.go:322] [apiclient] All control plane components are healthy after 6.003481 seconds
	I0229 02:20:10.898235  361093 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0229 02:20:10.898362  361093 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0229 02:20:10.898450  361093 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0229 02:20:10.898685  361093 kubeadm.go:322] [mark-control-plane] Marking the node embed-certs-665766 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0229 02:20:10.898770  361093 kubeadm.go:322] [bootstrap-token] Using token: 269xha.46kssuu5kaip43vm
	I0229 02:20:10.899874  361093 out.go:204]   - Configuring RBAC rules ...
	I0229 02:20:10.899970  361093 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0229 02:20:10.900078  361093 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0229 02:20:10.900198  361093 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0229 02:20:10.900334  361093 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0229 02:20:10.900513  361093 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0229 02:20:10.900636  361093 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0229 02:20:10.900771  361093 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0229 02:20:10.900814  361093 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0229 02:20:10.900864  361093 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0229 02:20:10.900874  361093 kubeadm.go:322] 
	I0229 02:20:10.900929  361093 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0229 02:20:10.900935  361093 kubeadm.go:322] 
	I0229 02:20:10.901047  361093 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0229 02:20:10.901067  361093 kubeadm.go:322] 
	I0229 02:20:10.901106  361093 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0229 02:20:10.901184  361093 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0229 02:20:10.901249  361093 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0229 02:20:10.901259  361093 kubeadm.go:322] 
	I0229 02:20:10.901323  361093 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0229 02:20:10.901335  361093 kubeadm.go:322] 
	I0229 02:20:10.901410  361093 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0229 02:20:10.901421  361093 kubeadm.go:322] 
	I0229 02:20:10.901485  361093 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0229 02:20:10.901585  361093 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0229 02:20:10.901691  361093 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0229 02:20:10.901702  361093 kubeadm.go:322] 
	I0229 02:20:10.901773  361093 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0229 02:20:10.901869  361093 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0229 02:20:10.901881  361093 kubeadm.go:322] 
	I0229 02:20:10.901991  361093 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 269xha.46kssuu5kaip43vm \
	I0229 02:20:10.902122  361093 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:99ebea2fb56f25d35c82503561681d8a6f4727046bb097effad0d0203de43e75 \
	I0229 02:20:10.902144  361093 kubeadm.go:322] 	--control-plane 
	I0229 02:20:10.902149  361093 kubeadm.go:322] 
	I0229 02:20:10.902254  361093 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0229 02:20:10.902273  361093 kubeadm.go:322] 
	I0229 02:20:10.902377  361093 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 269xha.46kssuu5kaip43vm \
	I0229 02:20:10.902520  361093 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:99ebea2fb56f25d35c82503561681d8a6f4727046bb097effad0d0203de43e75 
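[editor note] The join command above embeds a discovery-token CA-cert hash. If it ever needs to be re-derived on the control-plane node (for example after the printed token expires), the standard kubeadm procedure is the pipeline below; note this cluster keeps its CA under /var/lib/minikube/certs (per the [certs] lines above) rather than the default /etc/kubernetes/pki:

    # recompute the discovery-token-ca-cert-hash from the cluster CA
    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | sha256sum | awk '{print "sha256:"$1}'
    # mint a fresh token with the full join command attached
    kubeadm token create --print-join-command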
	I0229 02:20:10.902534  361093 cni.go:84] Creating CNI manager for ""
	I0229 02:20:10.902541  361093 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0229 02:20:10.904582  361093 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0229 02:20:10.905676  361093 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0229 02:20:10.930137  361093 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
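[editor note] The 457-byte file copied above is the bridge CNI config minikube generates for the "kvm2 + containerd" combination. Its exact contents are not in the log; the heredoc below writes a representative bridge conflist whose field values are illustrative assumptions, not the file minikube actually produced:

    # illustrative /etc/cni/net.d/1-k8s.conflist; the real one is generated by minikube
    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        { "type": "bridge", "bridge": "bridge", "isGateway": true, "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" } },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF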
	I0229 02:20:10.979891  361093 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0229 02:20:10.980027  361093 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=f83faa3abac33ca85ff15afa19006ad0a2554d61 minikube.k8s.io/name=embed-certs-665766 minikube.k8s.io/updated_at=2024_02_29T02_20_10_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:20:10.980030  361093 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:20:11.079204  361093 ops.go:34] apiserver oom_adj: -16
	I0229 02:20:11.314252  361093 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:20:11.814676  361093 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:20:12.315103  361093 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:20:12.814906  361093 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:20:13.314822  361093 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:20:13.814328  361093 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:20:14.314397  361093 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:20:14.814464  361093 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:20:15.315077  361093 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:20:15.814758  361093 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:20:16.314975  361093 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:20:16.815307  361093 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:20:17.315305  361093 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:20:17.814371  361093 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:20:18.315148  361093 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:20:18.814336  361093 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:20:19.314531  361093 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:20:19.814983  361093 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:20:20.314365  361093 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:20:20.815167  361093 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:20:21.314560  361093 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:20:21.814519  361093 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:20:22.315326  361093 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:20:22.814733  361093 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:20:23.315210  361093 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:20:23.460714  361093 kubeadm.go:1088] duration metric: took 12.480754596s to wait for elevateKubeSystemPrivileges.
	I0229 02:20:23.460760  361093 kubeadm.go:406] StartCluster complete in 4m59.384955855s
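[editor note] The 12.5s burst of `kubectl get sa default` calls above is minikube polling roughly every 500ms until the default ServiceAccount exists, which signals that the controller-manager has finished bootstrapping kube-system (elevateKubeSystemPrivileges then proceeds). A hedged one-liner with the same effect, using the exact command from the log:

    # poll until the default ServiceAccount appears (controller-manager is up)
    until sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default \
          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done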
	I0229 02:20:23.460835  361093 settings.go:142] acquiring lock: {Name:mkf6d985c87ae1ba2300543c86d438bf48134dc6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:20:23.460963  361093 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18063-309085/kubeconfig
	I0229 02:20:23.462373  361093 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18063-309085/kubeconfig: {Name:mkd85f0f36cbc770f723a754929a01738907b7bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:20:23.462619  361093 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0229 02:20:23.462712  361093 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0229 02:20:23.462806  361093 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-665766"
	I0229 02:20:23.462833  361093 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-665766"
	I0229 02:20:23.462842  361093 addons.go:69] Setting dashboard=true in profile "embed-certs-665766"
	W0229 02:20:23.462848  361093 addons.go:243] addon storage-provisioner should already be in state true
	I0229 02:20:23.462878  361093 addons.go:234] Setting addon dashboard=true in "embed-certs-665766"
	W0229 02:20:23.462887  361093 addons.go:243] addon dashboard should already be in state true
	I0229 02:20:23.462885  361093 config.go:182] Loaded profile config "embed-certs-665766": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0229 02:20:23.462865  361093 addons.go:69] Setting metrics-server=true in profile "embed-certs-665766"
	I0229 02:20:23.462912  361093 addons.go:234] Setting addon metrics-server=true in "embed-certs-665766"
	I0229 02:20:23.462837  361093 addons.go:69] Setting default-storageclass=true in profile "embed-certs-665766"
	I0229 02:20:23.462940  361093 host.go:66] Checking if "embed-certs-665766" exists ...
	W0229 02:20:23.462921  361093 addons.go:243] addon metrics-server should already be in state true
	I0229 02:20:23.462988  361093 host.go:66] Checking if "embed-certs-665766" exists ...
	I0229 02:20:23.462939  361093 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-665766"
	I0229 02:20:23.462940  361093 host.go:66] Checking if "embed-certs-665766" exists ...
	I0229 02:20:23.463367  361093 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0229 02:20:23.463390  361093 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0229 02:20:23.463409  361093 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:20:23.463414  361093 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:20:23.463390  361093 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0229 02:20:23.463448  361093 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:20:23.463573  361093 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0229 02:20:23.463594  361093 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:20:23.484706  361093 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36455
	I0229 02:20:23.484734  361093 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34121
	I0229 02:20:23.484744  361093 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44029
	I0229 02:20:23.484867  361093 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43235
	I0229 02:20:23.485323  361093 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:20:23.485340  361093 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:20:23.485376  361093 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:20:23.485416  361093 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:20:23.485852  361093 main.go:141] libmachine: Using API Version  1
	I0229 02:20:23.485859  361093 main.go:141] libmachine: Using API Version  1
	I0229 02:20:23.485870  361093 main.go:141] libmachine: Using API Version  1
	I0229 02:20:23.485878  361093 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:20:23.485875  361093 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:20:23.485887  361093 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:20:23.486261  361093 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:20:23.486314  361093 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:20:23.486428  361093 main.go:141] libmachine: Using API Version  1
	I0229 02:20:23.486441  361093 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:20:23.486554  361093 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:20:23.486728  361093 main.go:141] libmachine: (embed-certs-665766) Calling .GetState
	I0229 02:20:23.486962  361093 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0229 02:20:23.487011  361093 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:20:23.487123  361093 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0229 02:20:23.487168  361093 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:20:23.487916  361093 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:20:23.488429  361093 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0229 02:20:23.488468  361093 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:20:23.490061  361093 addons.go:234] Setting addon default-storageclass=true in "embed-certs-665766"
	W0229 02:20:23.490105  361093 addons.go:243] addon default-storageclass should already be in state true
	I0229 02:20:23.490135  361093 host.go:66] Checking if "embed-certs-665766" exists ...
	I0229 02:20:23.490519  361093 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0229 02:20:23.490554  361093 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:20:23.505714  361093 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43269
	I0229 02:20:23.506382  361093 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:20:23.506952  361093 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45455
	I0229 02:20:23.507108  361093 main.go:141] libmachine: Using API Version  1
	I0229 02:20:23.507125  361093 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:20:23.507297  361093 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:20:23.507838  361093 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:20:23.508574  361093 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0229 02:20:23.508601  361093 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:20:23.508856  361093 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42483
	I0229 02:20:23.509055  361093 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33449
	I0229 02:20:23.509239  361093 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:20:23.509409  361093 main.go:141] libmachine: Using API Version  1
	I0229 02:20:23.509420  361093 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:20:23.509928  361093 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:20:23.509971  361093 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:20:23.510020  361093 main.go:141] libmachine: Using API Version  1
	I0229 02:20:23.510043  361093 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:20:23.510427  361093 main.go:141] libmachine: Using API Version  1
	I0229 02:20:23.510446  361093 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:20:23.510456  361093 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:20:23.510457  361093 main.go:141] libmachine: (embed-certs-665766) Calling .GetState
	I0229 02:20:23.510836  361093 main.go:141] libmachine: (embed-certs-665766) Calling .GetState
	I0229 02:20:23.510844  361093 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:20:23.511614  361093 main.go:141] libmachine: (embed-certs-665766) Calling .GetState
	I0229 02:20:23.512674  361093 main.go:141] libmachine: (embed-certs-665766) Calling .DriverName
	I0229 02:20:23.512911  361093 main.go:141] libmachine: (embed-certs-665766) Calling .DriverName
	I0229 02:20:23.514837  361093 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0229 02:20:23.516144  361093 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0229 02:20:23.513612  361093 main.go:141] libmachine: (embed-certs-665766) Calling .DriverName
	I0229 02:20:23.518587  361093 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0229 02:20:23.518631  361093 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0229 02:20:23.519750  361093 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0229 02:20:23.520898  361093 addons.go:426] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0229 02:20:23.520912  361093 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0229 02:20:23.520925  361093 main.go:141] libmachine: (embed-certs-665766) Calling .GetSSHHostname
	I0229 02:20:23.519796  361093 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 02:20:23.519826  361093 main.go:141] libmachine: (embed-certs-665766) Calling .GetSSHHostname
	I0229 02:20:23.522245  361093 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0229 02:20:23.522263  361093 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0229 02:20:23.522279  361093 main.go:141] libmachine: (embed-certs-665766) Calling .GetSSHHostname
	I0229 02:20:23.525267  361093 main.go:141] libmachine: (embed-certs-665766) DBG | domain embed-certs-665766 has defined MAC address 52:54:00:0f:ed:e3 in network mk-embed-certs-665766
	I0229 02:20:23.525478  361093 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34297
	I0229 02:20:23.525918  361093 main.go:141] libmachine: (embed-certs-665766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:ed:e3", ip: ""} in network mk-embed-certs-665766: {Iface:virbr4 ExpiryTime:2024-02-29 03:15:12 +0000 UTC Type:0 Mac:52:54:00:0f:ed:e3 Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:embed-certs-665766 Clientid:01:52:54:00:0f:ed:e3}
	I0229 02:20:23.525942  361093 main.go:141] libmachine: (embed-certs-665766) DBG | domain embed-certs-665766 has defined IP address 192.168.39.252 and MAC address 52:54:00:0f:ed:e3 in network mk-embed-certs-665766
	I0229 02:20:23.526065  361093 main.go:141] libmachine: (embed-certs-665766) Calling .GetSSHPort
	I0229 02:20:23.526171  361093 main.go:141] libmachine: (embed-certs-665766) DBG | domain embed-certs-665766 has defined MAC address 52:54:00:0f:ed:e3 in network mk-embed-certs-665766
	I0229 02:20:23.526249  361093 main.go:141] libmachine: (embed-certs-665766) Calling .GetSSHKeyPath
	I0229 02:20:23.526364  361093 main.go:141] libmachine: (embed-certs-665766) Calling .GetSSHUsername
	I0229 02:20:23.526620  361093 sshutil.go:53] new ssh client: &{IP:192.168.39.252 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-309085/.minikube/machines/embed-certs-665766/id_rsa Username:docker}
	I0229 02:20:23.526677  361093 main.go:141] libmachine: (embed-certs-665766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:ed:e3", ip: ""} in network mk-embed-certs-665766: {Iface:virbr4 ExpiryTime:2024-02-29 03:15:12 +0000 UTC Type:0 Mac:52:54:00:0f:ed:e3 Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:embed-certs-665766 Clientid:01:52:54:00:0f:ed:e3}
	I0229 02:20:23.526706  361093 main.go:141] libmachine: (embed-certs-665766) DBG | domain embed-certs-665766 has defined IP address 192.168.39.252 and MAC address 52:54:00:0f:ed:e3 in network mk-embed-certs-665766
	I0229 02:20:23.526865  361093 main.go:141] libmachine: (embed-certs-665766) DBG | domain embed-certs-665766 has defined MAC address 52:54:00:0f:ed:e3 in network mk-embed-certs-665766
	I0229 02:20:23.526876  361093 main.go:141] libmachine: (embed-certs-665766) Calling .GetSSHPort
	I0229 02:20:23.526891  361093 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:20:23.527094  361093 main.go:141] libmachine: (embed-certs-665766) Calling .GetSSHKeyPath
	I0229 02:20:23.527286  361093 main.go:141] libmachine: (embed-certs-665766) Calling .GetSSHUsername
	I0229 02:20:23.527370  361093 main.go:141] libmachine: (embed-certs-665766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:ed:e3", ip: ""} in network mk-embed-certs-665766: {Iface:virbr4 ExpiryTime:2024-02-29 03:15:12 +0000 UTC Type:0 Mac:52:54:00:0f:ed:e3 Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:embed-certs-665766 Clientid:01:52:54:00:0f:ed:e3}
	I0229 02:20:23.527392  361093 main.go:141] libmachine: (embed-certs-665766) DBG | domain embed-certs-665766 has defined IP address 192.168.39.252 and MAC address 52:54:00:0f:ed:e3 in network mk-embed-certs-665766
	I0229 02:20:23.527414  361093 main.go:141] libmachine: Using API Version  1
	I0229 02:20:23.527426  361093 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:20:23.527431  361093 sshutil.go:53] new ssh client: &{IP:192.168.39.252 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-309085/.minikube/machines/embed-certs-665766/id_rsa Username:docker}
	I0229 02:20:23.527440  361093 main.go:141] libmachine: (embed-certs-665766) Calling .GetSSHPort
	I0229 02:20:23.527600  361093 main.go:141] libmachine: (embed-certs-665766) Calling .GetSSHKeyPath
	I0229 02:20:23.527770  361093 main.go:141] libmachine: (embed-certs-665766) Calling .GetSSHUsername
	I0229 02:20:23.527837  361093 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:20:23.527921  361093 sshutil.go:53] new ssh client: &{IP:192.168.39.252 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-309085/.minikube/machines/embed-certs-665766/id_rsa Username:docker}
	I0229 02:20:23.528137  361093 main.go:141] libmachine: (embed-certs-665766) Calling .GetState
	I0229 02:20:23.529551  361093 main.go:141] libmachine: (embed-certs-665766) Calling .DriverName
	I0229 02:20:23.529764  361093 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0229 02:20:23.529779  361093 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0229 02:20:23.529795  361093 main.go:141] libmachine: (embed-certs-665766) Calling .GetSSHHostname
	I0229 02:20:23.532530  361093 main.go:141] libmachine: (embed-certs-665766) DBG | domain embed-certs-665766 has defined MAC address 52:54:00:0f:ed:e3 in network mk-embed-certs-665766
	I0229 02:20:23.532935  361093 main.go:141] libmachine: (embed-certs-665766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:ed:e3", ip: ""} in network mk-embed-certs-665766: {Iface:virbr4 ExpiryTime:2024-02-29 03:15:12 +0000 UTC Type:0 Mac:52:54:00:0f:ed:e3 Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:embed-certs-665766 Clientid:01:52:54:00:0f:ed:e3}
	I0229 02:20:23.532987  361093 main.go:141] libmachine: (embed-certs-665766) DBG | domain embed-certs-665766 has defined IP address 192.168.39.252 and MAC address 52:54:00:0f:ed:e3 in network mk-embed-certs-665766
	I0229 02:20:23.533201  361093 main.go:141] libmachine: (embed-certs-665766) Calling .GetSSHPort
	I0229 02:20:23.533347  361093 main.go:141] libmachine: (embed-certs-665766) Calling .GetSSHKeyPath
	I0229 02:20:23.533475  361093 main.go:141] libmachine: (embed-certs-665766) Calling .GetSSHUsername
	I0229 02:20:23.533597  361093 sshutil.go:53] new ssh client: &{IP:192.168.39.252 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-309085/.minikube/machines/embed-certs-665766/id_rsa Username:docker}
	I0229 02:20:23.717181  361093 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0229 02:20:23.718730  361093 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0229 02:20:23.718746  361093 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0229 02:20:23.751609  361093 addons.go:426] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0229 02:20:23.751628  361093 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0229 02:20:23.774666  361093 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0229 02:20:23.783425  361093 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0229 02:20:23.783444  361093 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0229 02:20:23.799321  361093 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0229 02:20:23.843414  361093 addons.go:426] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0229 02:20:23.843438  361093 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0229 02:20:23.857004  361093 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0229 02:20:23.857027  361093 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0229 02:20:23.930205  361093 addons.go:426] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0229 02:20:23.930233  361093 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0229 02:20:23.943684  361093 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0229 02:20:23.970259  361093 kapi.go:248] "coredns" deployment in "kube-system" namespace and "embed-certs-665766" context rescaled to 1 replicas
	I0229 02:20:23.970298  361093 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.252 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0229 02:20:23.972009  361093 out.go:177] * Verifying Kubernetes components...
	I0229 02:20:23.973240  361093 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 02:20:24.061065  361093 addons.go:426] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0229 02:20:24.061103  361093 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0229 02:20:24.147407  361093 addons.go:426] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0229 02:20:24.147441  361093 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0229 02:20:24.204201  361093 addons.go:426] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0229 02:20:24.204236  361093 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0229 02:20:24.243191  361093 addons.go:426] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0229 02:20:24.243237  361093 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0229 02:20:24.263274  361093 addons.go:426] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0229 02:20:24.263299  361093 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0229 02:20:24.283356  361093 addons.go:426] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0229 02:20:24.283374  361093 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0229 02:20:24.303371  361093 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0229 02:20:25.432821  361093 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.715600333s)
	I0229 02:20:25.432877  361093 main.go:141] libmachine: Making call to close driver server
	I0229 02:20:25.432884  361093 main.go:141] libmachine: (embed-certs-665766) Calling .Close
	I0229 02:20:25.433179  361093 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:20:25.433198  361093 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:20:25.433214  361093 main.go:141] libmachine: Making call to close driver server
	I0229 02:20:25.433223  361093 main.go:141] libmachine: (embed-certs-665766) Calling .Close
	I0229 02:20:25.433233  361093 main.go:141] libmachine: (embed-certs-665766) DBG | Closing plugin on server side
	I0229 02:20:25.433477  361093 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:20:25.433499  361093 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:20:25.433519  361093 main.go:141] libmachine: (embed-certs-665766) DBG | Closing plugin on server side
	I0229 02:20:25.441485  361093 main.go:141] libmachine: Making call to close driver server
	I0229 02:20:25.441506  361093 main.go:141] libmachine: (embed-certs-665766) Calling .Close
	I0229 02:20:25.441772  361093 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:20:25.441788  361093 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:20:25.803307  361093 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.028599375s)
	I0229 02:20:25.803341  361093 start.go:929] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
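	(The /bin/bash pipeline that just completed rewrites the coredns ConfigMap in place: it fetches the Corefile with kubectl, uses sed to splice a hosts block in front of the forward directive and a log directive in front of errors, then replaces the ConfigMap. A minimal sketch of the resulting Corefile fragment — only the hosts and log entries come from the log above; the surrounding directives are assumptions based on a stock CoreDNS config:
	
		.:53 {
		    log
		    errors
		    health
		    hosts {
		       192.168.39.1 host.minikube.internal
		       fallthrough
		    }
		    forward . /etc/resolv.conf
		    cache 30
		}
	
	The hosts block resolves host.minikube.internal to the KVM host's bridge IP inside the cluster, falling through to the upstream forwarder for everything else.)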
	I0229 02:20:26.329323  361093 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.529964751s)
	I0229 02:20:26.329380  361093 main.go:141] libmachine: Making call to close driver server
	I0229 02:20:26.329389  361093 main.go:141] libmachine: (embed-certs-665766) Calling .Close
	I0229 02:20:26.329754  361093 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:20:26.329817  361093 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:20:26.329838  361093 main.go:141] libmachine: Making call to close driver server
	I0229 02:20:26.329836  361093 main.go:141] libmachine: (embed-certs-665766) DBG | Closing plugin on server side
	I0229 02:20:26.329847  361093 main.go:141] libmachine: (embed-certs-665766) Calling .Close
	I0229 02:20:26.330130  361093 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:20:26.330149  361093 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:20:26.330176  361093 main.go:141] libmachine: (embed-certs-665766) DBG | Closing plugin on server side
	I0229 02:20:26.411660  361093 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.438378455s)
	I0229 02:20:26.411727  361093 node_ready.go:35] waiting up to 6m0s for node "embed-certs-665766" to be "Ready" ...
	I0229 02:20:26.411785  361093 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.468059693s)
	I0229 02:20:26.411846  361093 main.go:141] libmachine: Making call to close driver server
	I0229 02:20:26.411904  361093 main.go:141] libmachine: (embed-certs-665766) Calling .Close
	I0229 02:20:26.412327  361093 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:20:26.412378  361093 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:20:26.412400  361093 main.go:141] libmachine: Making call to close driver server
	I0229 02:20:26.412418  361093 main.go:141] libmachine: (embed-certs-665766) Calling .Close
	I0229 02:20:26.412733  361093 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:20:26.412759  361093 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:20:26.412778  361093 addons.go:470] Verifying addon metrics-server=true in "embed-certs-665766"
	I0229 02:20:26.429799  361093 node_ready.go:49] node "embed-certs-665766" has status "Ready":"True"
	I0229 02:20:26.429834  361093 node_ready.go:38] duration metric: took 18.091958ms waiting for node "embed-certs-665766" to be "Ready" ...
	I0229 02:20:26.429848  361093 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 02:20:26.443918  361093 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-pf9x9" in "kube-system" namespace to be "Ready" ...
	I0229 02:20:26.453871  361093 pod_ready.go:92] pod "coredns-5dd5756b68-pf9x9" in "kube-system" namespace has status "Ready":"True"
	I0229 02:20:26.453893  361093 pod_ready.go:81] duration metric: took 9.938572ms waiting for pod "coredns-5dd5756b68-pf9x9" in "kube-system" namespace to be "Ready" ...
	I0229 02:20:26.453902  361093 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-665766" in "kube-system" namespace to be "Ready" ...
	I0229 02:20:26.459920  361093 pod_ready.go:92] pod "etcd-embed-certs-665766" in "kube-system" namespace has status "Ready":"True"
	I0229 02:20:26.459946  361093 pod_ready.go:81] duration metric: took 6.037204ms waiting for pod "etcd-embed-certs-665766" in "kube-system" namespace to be "Ready" ...
	I0229 02:20:26.459959  361093 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-665766" in "kube-system" namespace to be "Ready" ...
	I0229 02:20:26.465595  361093 pod_ready.go:92] pod "kube-apiserver-embed-certs-665766" in "kube-system" namespace has status "Ready":"True"
	I0229 02:20:26.465611  361093 pod_ready.go:81] duration metric: took 5.645555ms waiting for pod "kube-apiserver-embed-certs-665766" in "kube-system" namespace to be "Ready" ...
	I0229 02:20:26.465620  361093 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-665766" in "kube-system" namespace to be "Ready" ...
	I0229 02:20:26.470943  361093 pod_ready.go:92] pod "kube-controller-manager-embed-certs-665766" in "kube-system" namespace has status "Ready":"True"
	I0229 02:20:26.470960  361093 pod_ready.go:81] duration metric: took 5.334268ms waiting for pod "kube-controller-manager-embed-certs-665766" in "kube-system" namespace to be "Ready" ...
	I0229 02:20:26.470968  361093 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-gtjq6" in "kube-system" namespace to be "Ready" ...
	I0229 02:20:26.815785  361093 pod_ready.go:92] pod "kube-proxy-gtjq6" in "kube-system" namespace has status "Ready":"True"
	I0229 02:20:26.815809  361093 pod_ready.go:81] duration metric: took 344.835753ms waiting for pod "kube-proxy-gtjq6" in "kube-system" namespace to be "Ready" ...
	I0229 02:20:26.815820  361093 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-665766" in "kube-system" namespace to be "Ready" ...
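	(The pod_ready checks above poll each system-critical pod for the Ready condition. Roughly the same check can be reproduced by hand against this profile's kubeconfig; a sketch, where the context name matching the profile embed-certs-665766 is an assumption about the local kubeconfig:
	
		kubectl --context embed-certs-665766 -n kube-system get pods
		kubectl --context embed-certs-665766 -n kube-system wait pod \
		    --all --for=condition=Ready --timeout=6m
	)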
	I0229 02:20:27.179678  361093 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.87625995s)
	I0229 02:20:27.179741  361093 main.go:141] libmachine: Making call to close driver server
	I0229 02:20:27.179758  361093 main.go:141] libmachine: (embed-certs-665766) Calling .Close
	I0229 02:20:27.180115  361093 main.go:141] libmachine: (embed-certs-665766) DBG | Closing plugin on server side
	I0229 02:20:27.180169  361093 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:20:27.180191  361093 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:20:27.180201  361093 main.go:141] libmachine: Making call to close driver server
	I0229 02:20:27.180212  361093 main.go:141] libmachine: (embed-certs-665766) Calling .Close
	I0229 02:20:27.180476  361093 main.go:141] libmachine: (embed-certs-665766) DBG | Closing plugin on server side
	I0229 02:20:27.180521  361093 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:20:27.180534  361093 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:20:27.182123  361093 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-665766 addons enable metrics-server
	
	I0229 02:20:27.183370  361093 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0229 02:20:27.184639  361093 addons.go:505] enable addons completed in 3.721930887s: enabled=[default-storageclass storage-provisioner metrics-server dashboard]
	I0229 02:20:27.223120  361093 pod_ready.go:92] pod "kube-scheduler-embed-certs-665766" in "kube-system" namespace has status "Ready":"True"
	I0229 02:20:27.223149  361093 pod_ready.go:81] duration metric: took 407.321396ms waiting for pod "kube-scheduler-embed-certs-665766" in "kube-system" namespace to be "Ready" ...
	I0229 02:20:27.223163  361093 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace to be "Ready" ...
	I0229 02:20:29.231076  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:20:31.729827  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:20:33.745431  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:20:36.231699  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:20:38.238868  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:20:40.733145  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:20:43.231183  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:20:46.359040  360776 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 02:20:46.359315  360776 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 02:20:46.359346  360776 kubeadm.go:322] 
	I0229 02:20:46.359398  360776 kubeadm.go:322] Unfortunately, an error has occurred:
	I0229 02:20:46.359458  360776 kubeadm.go:322] 	timed out waiting for the condition
	I0229 02:20:46.359467  360776 kubeadm.go:322] 
	I0229 02:20:46.359511  360776 kubeadm.go:322] This error is likely caused by:
	I0229 02:20:46.359565  360776 kubeadm.go:322] 	- The kubelet is not running
	I0229 02:20:46.359711  360776 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0229 02:20:46.359720  360776 kubeadm.go:322] 
	I0229 02:20:46.359823  360776 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0229 02:20:46.359867  360776 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0229 02:20:46.359894  360776 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0229 02:20:46.359900  360776 kubeadm.go:322] 
	I0229 02:20:46.360005  360776 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0229 02:20:46.360128  360776 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0229 02:20:46.360236  360776 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0229 02:20:46.360310  360776 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0229 02:20:46.360381  360776 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0229 02:20:46.360410  360776 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0229 02:20:46.361502  360776 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0229 02:20:46.361603  360776 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0229 02:20:46.361688  360776 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	W0229 02:20:46.361890  360776 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
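	(The kubeadm failure text above suggests the standard kubelet checks. Since this run uses containerd rather than docker, the container listing goes through crictl, as the log itself does further down. A sketch of those checks, run on the node, for example via minikube ssh:
	
		sudo systemctl status kubelet
		sudo journalctl -xeu kubelet
		# containerd runtime: list Kubernetes containers with crictl instead of docker
		sudo crictl ps -a | grep kube | grep -v pause
		sudo crictl logs CONTAINERID    # CONTAINERID: placeholder from the kubeadm hint
	)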
	
	I0229 02:20:46.361946  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0229 02:20:46.833083  360776 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 02:20:46.850670  360776 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 02:20:46.863291  360776 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 02:20:46.863352  360776 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0229 02:20:46.929466  360776 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0229 02:20:46.929532  360776 kubeadm.go:322] [preflight] Running pre-flight checks
	I0229 02:20:47.064941  360776 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0229 02:20:47.065277  360776 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0229 02:20:47.065515  360776 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0229 02:20:47.284721  360776 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0229 02:20:47.285859  360776 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0229 02:20:47.295028  360776 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0229 02:20:47.429614  360776 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0229 02:20:47.431229  360776 out.go:204]   - Generating certificates and keys ...
	I0229 02:20:47.431315  360776 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0229 02:20:47.431389  360776 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0229 02:20:47.431487  360776 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0229 02:20:47.431603  360776 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0229 02:20:47.431719  360776 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0229 02:20:47.431796  360776 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0229 02:20:47.431890  360776 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0229 02:20:47.431974  360776 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0229 02:20:47.432093  360776 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0229 02:20:47.432212  360776 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0229 02:20:47.432275  360776 kubeadm.go:322] [certs] Using the existing "sa" key
	I0229 02:20:47.432366  360776 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0229 02:20:47.946255  360776 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0229 02:20:48.258186  360776 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0229 02:20:48.398982  360776 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0229 02:20:48.545961  360776 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0229 02:20:48.546829  360776 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0229 02:20:45.234594  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:20:47.731325  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:20:49.731500  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:20:48.548500  360776 out.go:204]   - Booting up control plane ...
	I0229 02:20:48.548614  360776 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0229 02:20:48.552604  360776 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0229 02:20:48.553548  360776 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0229 02:20:48.554256  360776 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0229 02:20:48.558508  360776 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0229 02:20:52.231128  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:20:54.231680  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:20:56.730802  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:20:58.731112  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:21:01.232479  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:21:03.234385  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:21:05.730268  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:21:08.231970  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:21:10.233205  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:21:12.734859  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:21:15.230796  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:21:17.231363  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:21:19.231526  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:21:21.731071  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:21:23.732749  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:21:26.230929  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:21:28.731131  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:21:28.560199  360776 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0229 02:21:28.560645  360776 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 02:21:28.560944  360776 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 02:21:31.231022  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:21:33.731025  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:21:33.561853  360776 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 02:21:33.562057  360776 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 02:21:35.731752  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:21:38.229754  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:21:40.229986  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:21:42.730384  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:21:44.730788  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:21:43.562844  360776 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 02:21:43.563063  360776 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 02:21:46.731643  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:21:49.232075  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:21:51.729864  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:21:53.730399  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:21:55.730728  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:21:57.732563  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:22:00.232769  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:22:02.233327  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:22:04.730582  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:22:03.563980  360776 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 02:22:03.564274  360776 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 02:22:06.730978  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:22:08.731753  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:22:10.733273  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:22:13.230888  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:22:15.231384  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:22:17.233309  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:22:19.736876  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:22:22.231745  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:22:24.730148  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:22:26.730332  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:22:28.731241  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:22:31.232262  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:22:33.729969  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:22:36.230298  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:22:38.232199  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:22:43.566143  360776 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 02:22:43.566419  360776 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 02:22:43.566432  360776 kubeadm.go:322] 
	I0229 02:22:43.566494  360776 kubeadm.go:322] Unfortunately, an error has occurred:
	I0229 02:22:43.566562  360776 kubeadm.go:322] 	timed out waiting for the condition
	I0229 02:22:43.566573  360776 kubeadm.go:322] 
	I0229 02:22:43.566621  360776 kubeadm.go:322] This error is likely caused by:
	I0229 02:22:43.566669  360776 kubeadm.go:322] 	- The kubelet is not running
	I0229 02:22:43.566789  360776 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0229 02:22:43.566798  360776 kubeadm.go:322] 
	I0229 02:22:43.566954  360776 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0229 02:22:43.567000  360776 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0229 02:22:43.567049  360776 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0229 02:22:43.567060  360776 kubeadm.go:322] 
	I0229 02:22:43.567282  360776 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0229 02:22:43.567417  360776 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0229 02:22:43.567521  360776 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0229 02:22:43.567592  360776 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0229 02:22:43.567684  360776 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0229 02:22:43.567736  360776 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0229 02:22:43.568136  360776 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0229 02:22:43.568244  360776 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0229 02:22:43.568368  360776 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0229 02:22:43.568439  360776 kubeadm.go:406] StartCluster complete in 8m6.863500244s
	I0229 02:22:43.568498  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:22:43.568644  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:22:43.619887  360776 cri.go:89] found id: ""
	I0229 02:22:43.619917  360776 logs.go:276] 0 containers: []
	W0229 02:22:43.619926  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:22:43.619932  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:22:43.619996  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:22:43.658073  360776 cri.go:89] found id: ""
	I0229 02:22:43.658110  360776 logs.go:276] 0 containers: []
	W0229 02:22:43.658120  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:22:43.658127  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:22:43.658197  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:22:43.697445  360776 cri.go:89] found id: ""
	I0229 02:22:43.697476  360776 logs.go:276] 0 containers: []
	W0229 02:22:43.697489  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:22:43.697495  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:22:43.697561  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:22:43.736241  360776 cri.go:89] found id: ""
	I0229 02:22:43.736270  360776 logs.go:276] 0 containers: []
	W0229 02:22:43.736278  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:22:43.736285  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:22:43.736345  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:22:43.775185  360776 cri.go:89] found id: ""
	I0229 02:22:43.775212  360776 logs.go:276] 0 containers: []
	W0229 02:22:43.775221  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:22:43.775227  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:22:43.775292  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:22:43.815309  360776 cri.go:89] found id: ""
	I0229 02:22:43.815338  360776 logs.go:276] 0 containers: []
	W0229 02:22:43.815347  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:22:43.815353  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:22:43.815436  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:22:43.860248  360776 cri.go:89] found id: ""
	I0229 02:22:43.860284  360776 logs.go:276] 0 containers: []
	W0229 02:22:43.860296  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:22:43.860305  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:22:43.860375  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:22:43.918615  360776 cri.go:89] found id: ""
	I0229 02:22:43.918644  360776 logs.go:276] 0 containers: []
	W0229 02:22:43.918656  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
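	(Each cri.go lookup above shells out to crictl and finds no matching container, confirming that no control-plane component ever started under containerd. The probe is just the command already shown in the log, e.g.:
	
		sudo crictl ps -a --quiet --name=kube-apiserver    # empty output => 0 containers
	)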
	I0229 02:22:43.918671  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:22:43.918687  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:22:43.966006  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:22:43.966045  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:22:43.981843  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:22:43.981875  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:22:44.056838  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:22:44.056870  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:22:44.056887  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:22:44.090353  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:22:44.090384  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0229 02:22:44.143169  360776 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
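Since this job runs with --container-runtime=containerd, the docker commands kubeadm suggests above are not available on the node. A minimal crictl-flavoured sketch of the same triage steps (CONTAINERID is a placeholder, as in kubeadm's own example):

    # list Kubernetes containers under containerd rather than docker
    sudo crictl ps -a | grep kube | grep -v pause
    # inspect the failing container's log
    sudo crictl logs CONTAINERID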
	W0229 02:22:44.143235  360776 out.go:239] * 
	W0229 02:22:44.143336  360776 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[kubeadm init stdout and stderr identical to the block above]
	W0229 02:22:44.143366  360776 out.go:239] * 
	W0229 02:22:44.144361  360776 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0229 02:22:44.147267  360776 out.go:177] 
	W0229 02:22:44.148417  360776 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[kubeadm init stdout and stderr identical to the block above]
	W0229 02:22:44.148458  360776 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0229 02:22:44.148476  360776 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0229 02:22:44.149710  360776 out.go:177] 
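To reproduce the suggested fix locally, the advice above translates into a retry along these lines. This is a sketch, not the harness's own invocation: the profile name is a placeholder, and pinning the kubelet cgroup driver to systemd is only the hypothesis the log itself proposes:

    # retry the v1.16.0 start with the kubelet cgroup driver pinned to systemd
    minikube start -p <profile> --kubernetes-version=v1.16.0 \
      --driver=kvm2 --container-runtime=containerd \
      --extra-config=kubelet.cgroup-driver=systemd
    # if it still fails, read the kubelet journal on the node
    minikube ssh -p <profile> -- sudo journalctl -xeu kubelet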
	I0229 02:22:40.731211  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:22:43.230524  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:22:45.232018  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:22:47.731166  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:22:50.231074  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:22:52.731967  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:22:54.732431  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:22:57.230523  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:22:59.230839  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:23:01.231188  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:23:03.730692  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:23:05.731139  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:23:08.229972  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:23:10.230875  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:23:12.731348  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:23:15.233235  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:23:17.730643  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:23:20.232963  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:23:22.730485  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:23:24.730676  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:23:26.731120  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:23:29.230981  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:23:31.730910  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:23:34.231238  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:23:36.232335  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:23:38.731165  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:23:40.731274  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:23:43.232341  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:23:45.731736  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:23:48.230390  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:23:50.740709  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:23:53.230645  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:23:55.730726  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:23:57.730949  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:23:59.732968  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:24:02.230504  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:24:04.732474  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:24:07.230833  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:24:09.730847  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:24:11.730927  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:24:14.231274  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:24:16.729839  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:24:18.731051  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:24:21.231048  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:24:23.731084  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:24:26.229186  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:24:27.229797  361093 pod_ready.go:81] duration metric: took 4m0.006619539s waiting for pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace to be "Ready" ...
	E0229 02:24:27.229822  361093 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0229 02:24:27.229831  361093 pod_ready.go:38] duration metric: took 4m0.799971766s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
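The four-minute loop above is minikube polling the pod's Ready condition. Assuming a kubeconfig pointing at this cluster, the same check can be run by hand; the pod name is taken from the log:

    # show why the metrics-server pod never became Ready
    kubectl -n kube-system describe pod metrics-server-57f55c9bc5-kdvvw
    # or block on the same condition minikube was waiting for
    kubectl -n kube-system wait --for=condition=Ready \
      pod/metrics-server-57f55c9bc5-kdvvw --timeout=4m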
	I0229 02:24:27.229884  361093 api_server.go:52] waiting for apiserver process to appear ...
	I0229 02:24:27.229929  361093 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:24:27.229995  361093 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:24:27.291934  361093 cri.go:89] found id: "ffe2867f408184f527822e4a2cddcfcfa4a8864f9998a3ee577ac492de112a8c"
	I0229 02:24:27.291961  361093 cri.go:89] found id: ""
	I0229 02:24:27.291970  361093 logs.go:276] 1 containers: [ffe2867f408184f527822e4a2cddcfcfa4a8864f9998a3ee577ac492de112a8c]
	I0229 02:24:27.292035  361093 ssh_runner.go:195] Run: which crictl
	I0229 02:24:27.297949  361093 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:24:27.298016  361093 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:24:27.339415  361093 cri.go:89] found id: "305afec51d956ad266a04132bd74138374be1aac1c9f76c9743b067f1eb338ff"
	I0229 02:24:27.339442  361093 cri.go:89] found id: ""
	I0229 02:24:27.339453  361093 logs.go:276] 1 containers: [305afec51d956ad266a04132bd74138374be1aac1c9f76c9743b067f1eb338ff]
	I0229 02:24:27.339507  361093 ssh_runner.go:195] Run: which crictl
	I0229 02:24:27.345127  361093 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:24:27.345177  361093 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:24:27.387015  361093 cri.go:89] found id: "44685132afea89185ffcb886e69ef264038d950cde72f2bccbe1714ffe6933f9"
	I0229 02:24:27.387037  361093 cri.go:89] found id: ""
	I0229 02:24:27.387046  361093 logs.go:276] 1 containers: [44685132afea89185ffcb886e69ef264038d950cde72f2bccbe1714ffe6933f9]
	I0229 02:24:27.387102  361093 ssh_runner.go:195] Run: which crictl
	I0229 02:24:27.393582  361093 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:24:27.393642  361093 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:24:27.433094  361093 cri.go:89] found id: "a050899dc459c0f956fc847db2f62c4392af771b5addaaecef80067ebd7cddd6"
	I0229 02:24:27.433119  361093 cri.go:89] found id: ""
	I0229 02:24:27.433128  361093 logs.go:276] 1 containers: [a050899dc459c0f956fc847db2f62c4392af771b5addaaecef80067ebd7cddd6]
	I0229 02:24:27.433192  361093 ssh_runner.go:195] Run: which crictl
	I0229 02:24:27.438777  361093 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:24:27.438849  361093 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:24:27.483522  361093 cri.go:89] found id: "22cc3882d1d0af1d1f8ad69f8282032dd8734f175d47efa5759d3e1a094d665d"
	I0229 02:24:27.483549  361093 cri.go:89] found id: ""
	I0229 02:24:27.483558  361093 logs.go:276] 1 containers: [22cc3882d1d0af1d1f8ad69f8282032dd8734f175d47efa5759d3e1a094d665d]
	I0229 02:24:27.483617  361093 ssh_runner.go:195] Run: which crictl
	I0229 02:24:27.490176  361093 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:24:27.490243  361093 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:24:27.532469  361093 cri.go:89] found id: "fda4f7aa1be28cbaaaf7ebb11881da33db2e5c4d971d65331158f96266256cf1"
	I0229 02:24:27.532487  361093 cri.go:89] found id: ""
	I0229 02:24:27.532494  361093 logs.go:276] 1 containers: [fda4f7aa1be28cbaaaf7ebb11881da33db2e5c4d971d65331158f96266256cf1]
	I0229 02:24:27.532538  361093 ssh_runner.go:195] Run: which crictl
	I0229 02:24:27.537281  361093 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:24:27.537340  361093 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:24:27.576126  361093 cri.go:89] found id: ""
	I0229 02:24:27.576148  361093 logs.go:276] 0 containers: []
	W0229 02:24:27.576159  361093 logs.go:278] No container was found matching "kindnet"
	I0229 02:24:27.576166  361093 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0229 02:24:27.576217  361093 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0229 02:24:27.615465  361093 cri.go:89] found id: "55abb48944a1bc9c3edcba34b0a7c1405b357105dbcbb8ec4f3132980fc825d6"
	I0229 02:24:27.615490  361093 cri.go:89] found id: ""
	I0229 02:24:27.615506  361093 logs.go:276] 1 containers: [55abb48944a1bc9c3edcba34b0a7c1405b357105dbcbb8ec4f3132980fc825d6]
	I0229 02:24:27.615564  361093 ssh_runner.go:195] Run: which crictl
	I0229 02:24:27.620302  361093 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:24:27.620360  361093 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:24:27.659108  361093 cri.go:89] found id: "87380c345330760b0702052952314cc9fc149d5ec7c01a8b237931cd373804ac"
	I0229 02:24:27.659124  361093 cri.go:89] found id: ""
	I0229 02:24:27.659130  361093 logs.go:276] 1 containers: [87380c345330760b0702052952314cc9fc149d5ec7c01a8b237931cd373804ac]
	I0229 02:24:27.659172  361093 ssh_runner.go:195] Run: which crictl
	I0229 02:24:27.664403  361093 logs.go:123] Gathering logs for kubelet ...
	I0229 02:24:27.664423  361093 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0229 02:24:27.734792  361093 logs.go:138] Found kubelet problem: Feb 29 02:20:23 embed-certs-665766 kubelet[3679]: W0229 02:20:23.366266    3679 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:embed-certs-665766" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-665766' and this object
	W0229 02:24:27.734947  361093 logs.go:138] Found kubelet problem: Feb 29 02:20:23 embed-certs-665766 kubelet[3679]: E0229 02:20:23.366395    3679 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:embed-certs-665766" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-665766' and this object
	W0229 02:24:27.736060  361093 logs.go:138] Found kubelet problem: Feb 29 02:20:23 embed-certs-665766 kubelet[3679]: W0229 02:20:23.644230    3679 reflector.go:535] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:embed-certs-665766" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-665766' and this object
	W0229 02:24:27.736207  361093 logs.go:138] Found kubelet problem: Feb 29 02:20:23 embed-certs-665766 kubelet[3679]: E0229 02:20:23.644260    3679 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:embed-certs-665766" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-665766' and this object
	I0229 02:24:27.765922  361093 logs.go:123] Gathering logs for dmesg ...
	I0229 02:24:27.765938  361093 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:24:27.785796  361093 logs.go:123] Gathering logs for kube-apiserver [ffe2867f408184f527822e4a2cddcfcfa4a8864f9998a3ee577ac492de112a8c] ...
	I0229 02:24:27.785813  361093 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ffe2867f408184f527822e4a2cddcfcfa4a8864f9998a3ee577ac492de112a8c"
	I0229 02:24:27.842548  361093 logs.go:123] Gathering logs for etcd [305afec51d956ad266a04132bd74138374be1aac1c9f76c9743b067f1eb338ff] ...
	I0229 02:24:27.842571  361093 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 305afec51d956ad266a04132bd74138374be1aac1c9f76c9743b067f1eb338ff"
	I0229 02:24:27.894566  361093 logs.go:123] Gathering logs for kube-controller-manager [fda4f7aa1be28cbaaaf7ebb11881da33db2e5c4d971d65331158f96266256cf1] ...
	I0229 02:24:27.894593  361093 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fda4f7aa1be28cbaaaf7ebb11881da33db2e5c4d971d65331158f96266256cf1"
	I0229 02:24:27.958511  361093 logs.go:123] Gathering logs for storage-provisioner [55abb48944a1bc9c3edcba34b0a7c1405b357105dbcbb8ec4f3132980fc825d6] ...
	I0229 02:24:27.958540  361093 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 55abb48944a1bc9c3edcba34b0a7c1405b357105dbcbb8ec4f3132980fc825d6"
	I0229 02:24:28.003113  361093 logs.go:123] Gathering logs for container status ...
	I0229 02:24:28.003143  361093 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:24:28.071141  361093 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:24:28.071170  361093 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0229 02:24:28.225631  361093 logs.go:123] Gathering logs for coredns [44685132afea89185ffcb886e69ef264038d950cde72f2bccbe1714ffe6933f9] ...
	I0229 02:24:28.225669  361093 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 44685132afea89185ffcb886e69ef264038d950cde72f2bccbe1714ffe6933f9"
	I0229 02:24:28.269384  361093 logs.go:123] Gathering logs for kube-scheduler [a050899dc459c0f956fc847db2f62c4392af771b5addaaecef80067ebd7cddd6] ...
	I0229 02:24:28.269420  361093 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a050899dc459c0f956fc847db2f62c4392af771b5addaaecef80067ebd7cddd6"
	I0229 02:24:28.317580  361093 logs.go:123] Gathering logs for kube-proxy [22cc3882d1d0af1d1f8ad69f8282032dd8734f175d47efa5759d3e1a094d665d] ...
	I0229 02:24:28.317613  361093 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 22cc3882d1d0af1d1f8ad69f8282032dd8734f175d47efa5759d3e1a094d665d"
	I0229 02:24:28.367251  361093 logs.go:123] Gathering logs for kubernetes-dashboard [87380c345330760b0702052952314cc9fc149d5ec7c01a8b237931cd373804ac] ...
	I0229 02:24:28.367281  361093 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 87380c345330760b0702052952314cc9fc149d5ec7c01a8b237931cd373804ac"
	I0229 02:24:28.406902  361093 logs.go:123] Gathering logs for containerd ...
	I0229 02:24:28.406933  361093 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
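The whole log-gathering pass above is built from ordinary CRI and journald commands, so it can be replayed by hand on the node; <id> below is a placeholder for the container IDs reported in the "found id:" lines:

    sudo crictl ps -a                      # enumerate all CRI containers
    sudo crictl logs --tail 400 <id>       # recent log of a single component
    sudo journalctl -u kubelet -n 400      # recent kubelet journal
    sudo journalctl -u containerd -n 400   # recent containerd journal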
	I0229 02:24:28.469427  361093 out.go:304] Setting ErrFile to fd 2...
	I0229 02:24:28.469451  361093 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0229 02:24:28.469508  361093 out.go:239] X Problems detected in kubelet:
	W0229 02:24:28.469521  361093 out.go:239]   Feb 29 02:20:23 embed-certs-665766 kubelet[3679]: W0229 02:20:23.366266    3679 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:embed-certs-665766" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-665766' and this object
	W0229 02:24:28.469577  361093 out.go:239]   Feb 29 02:20:23 embed-certs-665766 kubelet[3679]: E0229 02:20:23.366395    3679 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:embed-certs-665766" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-665766' and this object
	W0229 02:24:28.469591  361093 out.go:239]   Feb 29 02:20:23 embed-certs-665766 kubelet[3679]: W0229 02:20:23.644230    3679 reflector.go:535] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:embed-certs-665766" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-665766' and this object
	W0229 02:24:28.469600  361093 out.go:239]   Feb 29 02:20:23 embed-certs-665766 kubelet[3679]: E0229 02:20:23.644260    3679 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:embed-certs-665766" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-665766' and this object
	I0229 02:24:28.469607  361093 out.go:304] Setting ErrFile to fd 2...
	I0229 02:24:28.469612  361093 out.go:338] TERM=,COLORTERM=, which probably does not support color
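The four kubelet problems flagged above share one shape: right after the restart at 02:20:23, the node authorizer had not yet linked node embed-certs-665766 to the coredns and kube-proxy ConfigMaps, so the kubelet's list/watch was denied. They do not recur later in the log, which is consistent with a transient startup race rather than a persistent RBAC misconfiguration. One way to confirm that reading, with the profile name as a placeholder:

    # count node-authorizer denials in the kubelet journal; a small, non-growing
    # count suggests the warnings were a one-off at startup
    minikube ssh -p <profile> -- "sudo journalctl -u kubelet | grep -c 'no relationship found'"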
	I0229 02:24:38.469939  361093 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:24:38.486853  361093 api_server.go:72] duration metric: took 4m14.516525469s to wait for apiserver process to appear ...
	I0229 02:24:38.486879  361093 api_server.go:88] waiting for apiserver healthz status ...
	I0229 02:24:38.486925  361093 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:24:38.486978  361093 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:24:38.526577  361093 cri.go:89] found id: "ffe2867f408184f527822e4a2cddcfcfa4a8864f9998a3ee577ac492de112a8c"
	I0229 02:24:38.526602  361093 cri.go:89] found id: ""
	I0229 02:24:38.526610  361093 logs.go:276] 1 containers: [ffe2867f408184f527822e4a2cddcfcfa4a8864f9998a3ee577ac492de112a8c]
	I0229 02:24:38.526666  361093 ssh_runner.go:195] Run: which crictl
	I0229 02:24:38.531782  361093 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:24:38.531841  361093 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:24:38.570180  361093 cri.go:89] found id: "305afec51d956ad266a04132bd74138374be1aac1c9f76c9743b067f1eb338ff"
	I0229 02:24:38.570201  361093 cri.go:89] found id: ""
	I0229 02:24:38.570208  361093 logs.go:276] 1 containers: [305afec51d956ad266a04132bd74138374be1aac1c9f76c9743b067f1eb338ff]
	I0229 02:24:38.570258  361093 ssh_runner.go:195] Run: which crictl
	I0229 02:24:38.574922  361093 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:24:38.574988  361093 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:24:38.613064  361093 cri.go:89] found id: "44685132afea89185ffcb886e69ef264038d950cde72f2bccbe1714ffe6933f9"
	I0229 02:24:38.613080  361093 cri.go:89] found id: ""
	I0229 02:24:38.613086  361093 logs.go:276] 1 containers: [44685132afea89185ffcb886e69ef264038d950cde72f2bccbe1714ffe6933f9]
	I0229 02:24:38.613124  361093 ssh_runner.go:195] Run: which crictl
	I0229 02:24:38.617452  361093 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:24:38.617498  361093 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:24:38.657879  361093 cri.go:89] found id: "a050899dc459c0f956fc847db2f62c4392af771b5addaaecef80067ebd7cddd6"
	I0229 02:24:38.657904  361093 cri.go:89] found id: ""
	I0229 02:24:38.657913  361093 logs.go:276] 1 containers: [a050899dc459c0f956fc847db2f62c4392af771b5addaaecef80067ebd7cddd6]
	I0229 02:24:38.657969  361093 ssh_runner.go:195] Run: which crictl
	I0229 02:24:38.662995  361093 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:24:38.663076  361093 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:24:38.705399  361093 cri.go:89] found id: "22cc3882d1d0af1d1f8ad69f8282032dd8734f175d47efa5759d3e1a094d665d"
	I0229 02:24:38.705429  361093 cri.go:89] found id: ""
	I0229 02:24:38.705439  361093 logs.go:276] 1 containers: [22cc3882d1d0af1d1f8ad69f8282032dd8734f175d47efa5759d3e1a094d665d]
	I0229 02:24:38.705501  361093 ssh_runner.go:195] Run: which crictl
	I0229 02:24:38.710316  361093 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:24:38.710378  361093 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:24:38.750644  361093 cri.go:89] found id: "fda4f7aa1be28cbaaaf7ebb11881da33db2e5c4d971d65331158f96266256cf1"
	I0229 02:24:38.750671  361093 cri.go:89] found id: ""
	I0229 02:24:38.750681  361093 logs.go:276] 1 containers: [fda4f7aa1be28cbaaaf7ebb11881da33db2e5c4d971d65331158f96266256cf1]
	I0229 02:24:38.750737  361093 ssh_runner.go:195] Run: which crictl
	I0229 02:24:38.755297  361093 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:24:38.755352  361093 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:24:38.793540  361093 cri.go:89] found id: ""
	I0229 02:24:38.793557  361093 logs.go:276] 0 containers: []
	W0229 02:24:38.793564  361093 logs.go:278] No container was found matching "kindnet"
	I0229 02:24:38.793570  361093 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:24:38.793610  361093 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:24:38.831104  361093 cri.go:89] found id: "87380c345330760b0702052952314cc9fc149d5ec7c01a8b237931cd373804ac"
	I0229 02:24:38.831119  361093 cri.go:89] found id: ""
	I0229 02:24:38.831125  361093 logs.go:276] 1 containers: [87380c345330760b0702052952314cc9fc149d5ec7c01a8b237931cd373804ac]
	I0229 02:24:38.831160  361093 ssh_runner.go:195] Run: which crictl
	I0229 02:24:38.835275  361093 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0229 02:24:38.835323  361093 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0229 02:24:38.873475  361093 cri.go:89] found id: "55abb48944a1bc9c3edcba34b0a7c1405b357105dbcbb8ec4f3132980fc825d6"
	I0229 02:24:38.873493  361093 cri.go:89] found id: ""
	I0229 02:24:38.873500  361093 logs.go:276] 1 containers: [55abb48944a1bc9c3edcba34b0a7c1405b357105dbcbb8ec4f3132980fc825d6]
	I0229 02:24:38.873540  361093 ssh_runner.go:195] Run: which crictl
	I0229 02:24:38.878368  361093 logs.go:123] Gathering logs for kube-scheduler [a050899dc459c0f956fc847db2f62c4392af771b5addaaecef80067ebd7cddd6] ...
	I0229 02:24:38.878390  361093 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a050899dc459c0f956fc847db2f62c4392af771b5addaaecef80067ebd7cddd6"
	I0229 02:24:38.923522  361093 logs.go:123] Gathering logs for kube-proxy [22cc3882d1d0af1d1f8ad69f8282032dd8734f175d47efa5759d3e1a094d665d] ...
	I0229 02:24:38.923548  361093 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 22cc3882d1d0af1d1f8ad69f8282032dd8734f175d47efa5759d3e1a094d665d"
	I0229 02:24:38.964435  361093 logs.go:123] Gathering logs for container status ...
	I0229 02:24:38.964458  361093 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:24:39.005620  361093 logs.go:123] Gathering logs for kubelet ...
	I0229 02:24:39.005651  361093 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0229 02:24:39.073045  361093 logs.go:138] Found kubelet problem: Feb 29 02:20:23 embed-certs-665766 kubelet[3679]: W0229 02:20:23.366266    3679 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:embed-certs-665766" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-665766' and this object
	W0229 02:24:39.073209  361093 logs.go:138] Found kubelet problem: Feb 29 02:20:23 embed-certs-665766 kubelet[3679]: E0229 02:20:23.366395    3679 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:embed-certs-665766" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-665766' and this object
	W0229 02:24:39.074336  361093 logs.go:138] Found kubelet problem: Feb 29 02:20:23 embed-certs-665766 kubelet[3679]: W0229 02:20:23.644230    3679 reflector.go:535] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:embed-certs-665766" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-665766' and this object
	W0229 02:24:39.074496  361093 logs.go:138] Found kubelet problem: Feb 29 02:20:23 embed-certs-665766 kubelet[3679]: E0229 02:20:23.644260    3679 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:embed-certs-665766" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-665766' and this object
	I0229 02:24:39.110446  361093 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:24:39.110478  361093 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0229 02:24:39.232166  361093 logs.go:123] Gathering logs for kube-apiserver [ffe2867f408184f527822e4a2cddcfcfa4a8864f9998a3ee577ac492de112a8c] ...
	I0229 02:24:39.232198  361093 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ffe2867f408184f527822e4a2cddcfcfa4a8864f9998a3ee577ac492de112a8c"
	I0229 02:24:39.280691  361093 logs.go:123] Gathering logs for etcd [305afec51d956ad266a04132bd74138374be1aac1c9f76c9743b067f1eb338ff] ...
	I0229 02:24:39.280722  361093 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 305afec51d956ad266a04132bd74138374be1aac1c9f76c9743b067f1eb338ff"
	I0229 02:24:39.333042  361093 logs.go:123] Gathering logs for storage-provisioner [55abb48944a1bc9c3edcba34b0a7c1405b357105dbcbb8ec4f3132980fc825d6] ...
	I0229 02:24:39.333075  361093 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 55abb48944a1bc9c3edcba34b0a7c1405b357105dbcbb8ec4f3132980fc825d6"
	I0229 02:24:39.376476  361093 logs.go:123] Gathering logs for containerd ...
	I0229 02:24:39.376511  361093 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:24:39.460706  361093 logs.go:123] Gathering logs for dmesg ...
	I0229 02:24:39.460753  361093 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:24:39.478278  361093 logs.go:123] Gathering logs for coredns [44685132afea89185ffcb886e69ef264038d950cde72f2bccbe1714ffe6933f9] ...
	I0229 02:24:39.478312  361093 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 44685132afea89185ffcb886e69ef264038d950cde72f2bccbe1714ffe6933f9"
	I0229 02:24:39.520503  361093 logs.go:123] Gathering logs for kube-controller-manager [fda4f7aa1be28cbaaaf7ebb11881da33db2e5c4d971d65331158f96266256cf1] ...
	I0229 02:24:39.520540  361093 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fda4f7aa1be28cbaaaf7ebb11881da33db2e5c4d971d65331158f96266256cf1"
	I0229 02:24:39.585358  361093 logs.go:123] Gathering logs for kubernetes-dashboard [87380c345330760b0702052952314cc9fc149d5ec7c01a8b237931cd373804ac] ...
	I0229 02:24:39.585398  361093 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 87380c345330760b0702052952314cc9fc149d5ec7c01a8b237931cd373804ac"
	I0229 02:24:39.626645  361093 out.go:304] Setting ErrFile to fd 2...
	I0229 02:24:39.626675  361093 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0229 02:24:39.626752  361093 out.go:239] X Problems detected in kubelet:
	W0229 02:24:39.626765  361093 out.go:239]   Feb 29 02:20:23 embed-certs-665766 kubelet[3679]: W0229 02:20:23.366266    3679 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:embed-certs-665766" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-665766' and this object
	W0229 02:24:39.626773  361093 out.go:239]   Feb 29 02:20:23 embed-certs-665766 kubelet[3679]: E0229 02:20:23.366395    3679 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:embed-certs-665766" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-665766' and this object
	W0229 02:24:39.626785  361093 out.go:239]   Feb 29 02:20:23 embed-certs-665766 kubelet[3679]: W0229 02:20:23.644230    3679 reflector.go:535] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:embed-certs-665766" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-665766' and this object
	W0229 02:24:39.626799  361093 out.go:239]   Feb 29 02:20:23 embed-certs-665766 kubelet[3679]: E0229 02:20:23.644260    3679 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:embed-certs-665766" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-665766' and this object
	I0229 02:24:39.626808  361093 out.go:304] Setting ErrFile to fd 2...
	I0229 02:24:39.626816  361093 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 02:24:49.628247  361093 api_server.go:253] Checking apiserver healthz at https://192.168.39.252:8443/healthz ...
	I0229 02:24:49.633437  361093 api_server.go:279] https://192.168.39.252:8443/healthz returned 200:
	ok
	I0229 02:24:49.634869  361093 api_server.go:141] control plane version: v1.28.4
	I0229 02:24:49.634888  361093 api_server.go:131] duration metric: took 11.148001248s to wait for apiserver health ...
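The healthz probe minikube performs above can be reproduced with curl. The IP and port come from the log; -k is assumed to be acceptable because the apiserver's CA is not in the host trust store, and /healthz is readable without credentials under the default RBAC rules:

    curl -k https://192.168.39.252:8443/healthz
    # a healthy control plane answers: ok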
	I0229 02:24:49.634899  361093 system_pods.go:43] waiting for kube-system pods to appear ...
	I0229 02:24:49.634928  361093 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:24:49.634996  361093 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:24:49.677174  361093 cri.go:89] found id: "ffe2867f408184f527822e4a2cddcfcfa4a8864f9998a3ee577ac492de112a8c"
	I0229 02:24:49.677204  361093 cri.go:89] found id: ""
	I0229 02:24:49.677214  361093 logs.go:276] 1 containers: [ffe2867f408184f527822e4a2cddcfcfa4a8864f9998a3ee577ac492de112a8c]
	I0229 02:24:49.677292  361093 ssh_runner.go:195] Run: which crictl
	I0229 02:24:49.682331  361093 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:24:49.682397  361093 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:24:49.722340  361093 cri.go:89] found id: "305afec51d956ad266a04132bd74138374be1aac1c9f76c9743b067f1eb338ff"
	I0229 02:24:49.722363  361093 cri.go:89] found id: ""
	I0229 02:24:49.722370  361093 logs.go:276] 1 containers: [305afec51d956ad266a04132bd74138374be1aac1c9f76c9743b067f1eb338ff]
	I0229 02:24:49.722429  361093 ssh_runner.go:195] Run: which crictl
	I0229 02:24:49.727151  361093 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:24:49.727206  361093 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:24:49.771669  361093 cri.go:89] found id: "44685132afea89185ffcb886e69ef264038d950cde72f2bccbe1714ffe6933f9"
	I0229 02:24:49.771693  361093 cri.go:89] found id: ""
	I0229 02:24:49.771700  361093 logs.go:276] 1 containers: [44685132afea89185ffcb886e69ef264038d950cde72f2bccbe1714ffe6933f9]
	I0229 02:24:49.771750  361093 ssh_runner.go:195] Run: which crictl
	I0229 02:24:49.777043  361093 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:24:49.777091  361093 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:24:49.817045  361093 cri.go:89] found id: "a050899dc459c0f956fc847db2f62c4392af771b5addaaecef80067ebd7cddd6"
	I0229 02:24:49.817071  361093 cri.go:89] found id: ""
	I0229 02:24:49.817081  361093 logs.go:276] 1 containers: [a050899dc459c0f956fc847db2f62c4392af771b5addaaecef80067ebd7cddd6]
	I0229 02:24:49.817130  361093 ssh_runner.go:195] Run: which crictl
	I0229 02:24:49.821786  361093 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:24:49.821837  361093 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:24:49.860078  361093 cri.go:89] found id: "22cc3882d1d0af1d1f8ad69f8282032dd8734f175d47efa5759d3e1a094d665d"
	I0229 02:24:49.860110  361093 cri.go:89] found id: ""
	I0229 02:24:49.860119  361093 logs.go:276] 1 containers: [22cc3882d1d0af1d1f8ad69f8282032dd8734f175d47efa5759d3e1a094d665d]
	I0229 02:24:49.860183  361093 ssh_runner.go:195] Run: which crictl
	I0229 02:24:49.866369  361093 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:24:49.866473  361093 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:24:49.915578  361093 cri.go:89] found id: "fda4f7aa1be28cbaaaf7ebb11881da33db2e5c4d971d65331158f96266256cf1"
	I0229 02:24:49.915607  361093 cri.go:89] found id: ""
	I0229 02:24:49.915615  361093 logs.go:276] 1 containers: [fda4f7aa1be28cbaaaf7ebb11881da33db2e5c4d971d65331158f96266256cf1]
	I0229 02:24:49.915684  361093 ssh_runner.go:195] Run: which crictl
	I0229 02:24:49.920846  361093 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:24:49.920932  361093 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:24:49.962645  361093 cri.go:89] found id: ""
	I0229 02:24:49.962671  361093 logs.go:276] 0 containers: []
	W0229 02:24:49.962680  361093 logs.go:278] No container was found matching "kindnet"
	I0229 02:24:49.962687  361093 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:24:49.962740  361093 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:24:50.011096  361093 cri.go:89] found id: "87380c345330760b0702052952314cc9fc149d5ec7c01a8b237931cd373804ac"
	I0229 02:24:50.011121  361093 cri.go:89] found id: ""
	I0229 02:24:50.011128  361093 logs.go:276] 1 containers: [87380c345330760b0702052952314cc9fc149d5ec7c01a8b237931cd373804ac]
	I0229 02:24:50.011178  361093 ssh_runner.go:195] Run: which crictl
	I0229 02:24:50.016421  361093 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0229 02:24:50.016476  361093 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0229 02:24:50.063649  361093 cri.go:89] found id: "55abb48944a1bc9c3edcba34b0a7c1405b357105dbcbb8ec4f3132980fc825d6"
	I0229 02:24:50.063670  361093 cri.go:89] found id: ""
	I0229 02:24:50.063676  361093 logs.go:276] 1 containers: [55abb48944a1bc9c3edcba34b0a7c1405b357105dbcbb8ec4f3132980fc825d6]
	I0229 02:24:50.063733  361093 ssh_runner.go:195] Run: which crictl
	I0229 02:24:50.068841  361093 logs.go:123] Gathering logs for etcd [305afec51d956ad266a04132bd74138374be1aac1c9f76c9743b067f1eb338ff] ...
	I0229 02:24:50.068860  361093 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 305afec51d956ad266a04132bd74138374be1aac1c9f76c9743b067f1eb338ff"
	I0229 02:24:50.125960  361093 logs.go:123] Gathering logs for coredns [44685132afea89185ffcb886e69ef264038d950cde72f2bccbe1714ffe6933f9] ...
	I0229 02:24:50.125991  361093 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 44685132afea89185ffcb886e69ef264038d950cde72f2bccbe1714ffe6933f9"
	I0229 02:24:50.168727  361093 logs.go:123] Gathering logs for kube-controller-manager [fda4f7aa1be28cbaaaf7ebb11881da33db2e5c4d971d65331158f96266256cf1] ...
	I0229 02:24:50.168762  361093 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fda4f7aa1be28cbaaaf7ebb11881da33db2e5c4d971d65331158f96266256cf1"
	I0229 02:24:50.240474  361093 logs.go:123] Gathering logs for kubernetes-dashboard [87380c345330760b0702052952314cc9fc149d5ec7c01a8b237931cd373804ac] ...
	I0229 02:24:50.240509  361093 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 87380c345330760b0702052952314cc9fc149d5ec7c01a8b237931cd373804ac"
	I0229 02:24:50.284140  361093 logs.go:123] Gathering logs for kubelet ...
	I0229 02:24:50.284171  361093 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0229 02:24:50.348949  361093 logs.go:138] Found kubelet problem: Feb 29 02:20:23 embed-certs-665766 kubelet[3679]: W0229 02:20:23.366266    3679 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:embed-certs-665766" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-665766' and this object
	W0229 02:24:50.349117  361093 logs.go:138] Found kubelet problem: Feb 29 02:20:23 embed-certs-665766 kubelet[3679]: E0229 02:20:23.366395    3679 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:embed-certs-665766" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-665766' and this object
	W0229 02:24:50.350594  361093 logs.go:138] Found kubelet problem: Feb 29 02:20:23 embed-certs-665766 kubelet[3679]: W0229 02:20:23.644230    3679 reflector.go:535] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:embed-certs-665766" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-665766' and this object
	W0229 02:24:50.350762  361093 logs.go:138] Found kubelet problem: Feb 29 02:20:23 embed-certs-665766 kubelet[3679]: E0229 02:20:23.644260    3679 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:embed-certs-665766" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-665766' and this object
	I0229 02:24:50.381167  361093 logs.go:123] Gathering logs for dmesg ...
	I0229 02:24:50.381209  361093 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:24:50.397094  361093 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:24:50.397126  361093 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0229 02:24:50.526336  361093 logs.go:123] Gathering logs for kube-apiserver [ffe2867f408184f527822e4a2cddcfcfa4a8864f9998a3ee577ac492de112a8c] ...
	I0229 02:24:50.526374  361093 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ffe2867f408184f527822e4a2cddcfcfa4a8864f9998a3ee577ac492de112a8c"
	I0229 02:24:50.580463  361093 logs.go:123] Gathering logs for kube-scheduler [a050899dc459c0f956fc847db2f62c4392af771b5addaaecef80067ebd7cddd6] ...
	I0229 02:24:50.580495  361093 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a050899dc459c0f956fc847db2f62c4392af771b5addaaecef80067ebd7cddd6"
	I0229 02:24:50.627952  361093 logs.go:123] Gathering logs for kube-proxy [22cc3882d1d0af1d1f8ad69f8282032dd8734f175d47efa5759d3e1a094d665d] ...
	I0229 02:24:50.627988  361093 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 22cc3882d1d0af1d1f8ad69f8282032dd8734f175d47efa5759d3e1a094d665d"
	I0229 02:24:50.671981  361093 logs.go:123] Gathering logs for storage-provisioner [55abb48944a1bc9c3edcba34b0a7c1405b357105dbcbb8ec4f3132980fc825d6] ...
	I0229 02:24:50.672014  361093 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 55abb48944a1bc9c3edcba34b0a7c1405b357105dbcbb8ec4f3132980fc825d6"
	I0229 02:24:50.711025  361093 logs.go:123] Gathering logs for containerd ...
	I0229 02:24:50.711079  361093 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:24:50.780064  361093 logs.go:123] Gathering logs for container status ...
	I0229 02:24:50.780110  361093 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:24:50.827300  361093 out.go:304] Setting ErrFile to fd 2...
	I0229 02:24:50.827326  361093 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0229 02:24:50.827392  361093 out.go:239] X Problems detected in kubelet:
	W0229 02:24:50.827407  361093 out.go:239]   Feb 29 02:20:23 embed-certs-665766 kubelet[3679]: W0229 02:20:23.366266    3679 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:embed-certs-665766" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-665766' and this object
	W0229 02:24:50.827419  361093 out.go:239]   Feb 29 02:20:23 embed-certs-665766 kubelet[3679]: E0229 02:20:23.366395    3679 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:embed-certs-665766" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-665766' and this object
	W0229 02:24:50.827432  361093 out.go:239]   Feb 29 02:20:23 embed-certs-665766 kubelet[3679]: W0229 02:20:23.644230    3679 reflector.go:535] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:embed-certs-665766" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-665766' and this object
	W0229 02:24:50.827443  361093 out.go:239]   Feb 29 02:20:23 embed-certs-665766 kubelet[3679]: E0229 02:20:23.644260    3679 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:embed-certs-665766" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-665766' and this object
	I0229 02:24:50.827459  361093 out.go:304] Setting ErrFile to fd 2...
	I0229 02:24:50.827470  361093 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 02:25:00.835010  361093 system_pods.go:59] 8 kube-system pods found
	I0229 02:25:00.835043  361093 system_pods.go:61] "coredns-5dd5756b68-pf9x9" [d22bf48c-c24a-4e0c-8b94-2269b2c1e45e] Running
	I0229 02:25:00.835048  361093 system_pods.go:61] "etcd-embed-certs-665766" [26a6156f-b3e4-4e05-862c-98c77e9ca852] Running
	I0229 02:25:00.835052  361093 system_pods.go:61] "kube-apiserver-embed-certs-665766" [d6b452c8-0a2c-4ba9-bebc-f04625dcfeef] Running
	I0229 02:25:00.835056  361093 system_pods.go:61] "kube-controller-manager-embed-certs-665766" [d2542a5c-ba48-4e5b-b832-f417b7b1f060] Running
	I0229 02:25:00.835059  361093 system_pods.go:61] "kube-proxy-gtjq6" [e0e66d49-0861-4546-8b3a-0ea3f2021769] Running
	I0229 02:25:00.835062  361093 system_pods.go:61] "kube-scheduler-embed-certs-665766" [4e8a17cb-507c-41e8-a326-d88d778f1eea] Running
	I0229 02:25:00.835069  361093 system_pods.go:61] "metrics-server-57f55c9bc5-kdvvw" [b70c8f8c-dd5b-4653-838d-3815d52cc0f3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0229 02:25:00.835075  361093 system_pods.go:61] "storage-provisioner" [97993825-092f-4d18-aeeb-64fde6ba795e] Running
	I0229 02:25:00.835084  361093 system_pods.go:74] duration metric: took 11.200178346s to wait for pod list to return data ...
	I0229 02:25:00.835095  361093 default_sa.go:34] waiting for default service account to be created ...
	I0229 02:25:00.837666  361093 default_sa.go:45] found service account: "default"
	I0229 02:25:00.837688  361093 default_sa.go:55] duration metric: took 2.584028ms for default service account to be created ...
	I0229 02:25:00.837699  361093 system_pods.go:116] waiting for k8s-apps to be running ...
	I0229 02:25:00.844008  361093 system_pods.go:86] 8 kube-system pods found
	I0229 02:25:00.844031  361093 system_pods.go:89] "coredns-5dd5756b68-pf9x9" [d22bf48c-c24a-4e0c-8b94-2269b2c1e45e] Running
	I0229 02:25:00.844038  361093 system_pods.go:89] "etcd-embed-certs-665766" [26a6156f-b3e4-4e05-862c-98c77e9ca852] Running
	I0229 02:25:00.844043  361093 system_pods.go:89] "kube-apiserver-embed-certs-665766" [d6b452c8-0a2c-4ba9-bebc-f04625dcfeef] Running
	I0229 02:25:00.844050  361093 system_pods.go:89] "kube-controller-manager-embed-certs-665766" [d2542a5c-ba48-4e5b-b832-f417b7b1f060] Running
	I0229 02:25:00.844055  361093 system_pods.go:89] "kube-proxy-gtjq6" [e0e66d49-0861-4546-8b3a-0ea3f2021769] Running
	I0229 02:25:00.844060  361093 system_pods.go:89] "kube-scheduler-embed-certs-665766" [4e8a17cb-507c-41e8-a326-d88d778f1eea] Running
	I0229 02:25:00.844069  361093 system_pods.go:89] "metrics-server-57f55c9bc5-kdvvw" [b70c8f8c-dd5b-4653-838d-3815d52cc0f3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0229 02:25:00.844076  361093 system_pods.go:89] "storage-provisioner" [97993825-092f-4d18-aeeb-64fde6ba795e] Running
	I0229 02:25:00.844086  361093 system_pods.go:126] duration metric: took 6.380306ms to wait for k8s-apps to be running ...
	I0229 02:25:00.844095  361093 system_svc.go:44] waiting for kubelet service to be running ....
	I0229 02:25:00.844144  361093 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 02:25:00.862900  361093 system_svc.go:56] duration metric: took 18.796697ms WaitForService to wait for kubelet.
	I0229 02:25:00.862927  361093 kubeadm.go:581] duration metric: took 4m36.892603056s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0229 02:25:00.862952  361093 node_conditions.go:102] verifying NodePressure condition ...
	I0229 02:25:00.865826  361093 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0229 02:25:00.865846  361093 node_conditions.go:123] node cpu capacity is 2
	I0229 02:25:00.865899  361093 node_conditions.go:105] duration metric: took 2.937756ms to run NodePressure ...
	I0229 02:25:00.865915  361093 start.go:228] waiting for startup goroutines ...
	I0229 02:25:00.865931  361093 start.go:233] waiting for cluster config update ...
	I0229 02:25:00.865971  361093 start.go:242] writing updated cluster config ...
	I0229 02:25:00.866301  361093 ssh_runner.go:195] Run: rm -f paused
	I0229 02:25:00.917044  361093 start.go:601] kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)
	I0229 02:25:00.920135  361093 out.go:177] * Done! kubectl is now configured to use "embed-certs-665766" cluster and "default" namespace by default
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> containerd <==
	Feb 29 02:14:34 old-k8s-version-254968 containerd[614]: time="2024-02-29T02:14:34.165624877Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Feb 29 02:14:34 old-k8s-version-254968 containerd[614]: time="2024-02-29T02:14:34.165698335Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Feb 29 02:14:34 old-k8s-version-254968 containerd[614]: time="2024-02-29T02:14:34.165745697Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Feb 29 02:14:34 old-k8s-version-254968 containerd[614]: time="2024-02-29T02:14:34.165787935Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Feb 29 02:14:34 old-k8s-version-254968 containerd[614]: time="2024-02-29T02:14:34.165917244Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Feb 29 02:14:34 old-k8s-version-254968 containerd[614]: time="2024-02-29T02:14:34.165968270Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Feb 29 02:14:34 old-k8s-version-254968 containerd[614]: time="2024-02-29T02:14:34.166006973Z" level=info msg="NRI interface is disabled by configuration."
	Feb 29 02:14:34 old-k8s-version-254968 containerd[614]: time="2024-02-29T02:14:34.166044436Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
	Feb 29 02:14:34 old-k8s-version-254968 containerd[614]: time="2024-02-29T02:14:34.166543615Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:true IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath:/etc/containerd/certs.d Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress: StreamServerPort:10010 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.1 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s} ContainerdRootDir:/mnt/vda1/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/mnt/vda1/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
	Feb 29 02:14:34 old-k8s-version-254968 containerd[614]: time="2024-02-29T02:14:34.166717042Z" level=info msg="Connect containerd service"
	Feb 29 02:14:34 old-k8s-version-254968 containerd[614]: time="2024-02-29T02:14:34.166807336Z" level=info msg="using legacy CRI server"
	Feb 29 02:14:34 old-k8s-version-254968 containerd[614]: time="2024-02-29T02:14:34.166857305Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Feb 29 02:14:34 old-k8s-version-254968 containerd[614]: time="2024-02-29T02:14:34.166925237Z" level=info msg="Get image filesystem path \"/mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
	Feb 29 02:14:34 old-k8s-version-254968 containerd[614]: time="2024-02-29T02:14:34.168440964Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Feb 29 02:14:34 old-k8s-version-254968 containerd[614]: time="2024-02-29T02:14:34.169467518Z" level=info msg="Start subscribing containerd event"
	Feb 29 02:14:34 old-k8s-version-254968 containerd[614]: time="2024-02-29T02:14:34.169852898Z" level=info msg="Start recovering state"
	Feb 29 02:14:34 old-k8s-version-254968 containerd[614]: time="2024-02-29T02:14:34.169759950Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Feb 29 02:14:34 old-k8s-version-254968 containerd[614]: time="2024-02-29T02:14:34.170354766Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Feb 29 02:14:34 old-k8s-version-254968 containerd[614]: time="2024-02-29T02:14:34.216434996Z" level=info msg="Start event monitor"
	Feb 29 02:14:34 old-k8s-version-254968 containerd[614]: time="2024-02-29T02:14:34.216570893Z" level=info msg="Start snapshots syncer"
	Feb 29 02:14:34 old-k8s-version-254968 containerd[614]: time="2024-02-29T02:14:34.216584766Z" level=info msg="Start cni network conf syncer for default"
	Feb 29 02:14:34 old-k8s-version-254968 containerd[614]: time="2024-02-29T02:14:34.216590881Z" level=info msg="Start streaming server"
	Feb 29 02:14:34 old-k8s-version-254968 containerd[614]: time="2024-02-29T02:14:34.216768197Z" level=info msg="containerd successfully booted in 0.090655s"
	Feb 29 02:18:50 old-k8s-version-254968 containerd[614]: time="2024-02-29T02:18:50.110070145Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE        \"/etc/cni/net.d/87-podman-bridge.conflist.mk_disabled\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Feb 29 02:18:50 old-k8s-version-254968 containerd[614]: time="2024-02-29T02:18:50.110410570Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE        \"/etc/cni/net.d/.keep\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
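	
	Note: the "cni config load failed: no network config found in /etc/cni/net.d" errors above mean containerd's CRI plugin had no CNI network config to load, so pod networking could not initialize at that point. As a minimal sketch only (the file name, bridge name, and subnet below are hypothetical and not taken from this run), a conflist of the shape the plugin's config loader accepts would be written like this:
	
	# hypothetical example: drop a minimal bridge conflist where the CRI plugin looks for one
	sudo tee /etc/cni/net.d/10-minimal-bridge.conflist >/dev/null <<'EOF'
	{
	  "cniVersion": "0.4.0",
	  "name": "minimal-bridge",
	  "plugins": [
	    { "type": "bridge", "bridge": "cni0", "isGateway": true, "ipMasq": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" } }
	  ]
	}
	EOF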
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Feb29 02:14] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.054511] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.043108] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.634203] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.396865] systemd-fstab-generator[114]: Ignoring "noauto" option for root device
	[  +1.706137] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +5.508894] systemd-fstab-generator[477]: Ignoring "noauto" option for root device
	[  +0.058297] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.061765] systemd-fstab-generator[489]: Ignoring "noauto" option for root device
	[  +0.223600] systemd-fstab-generator[503]: Ignoring "noauto" option for root device
	[  +0.145548] systemd-fstab-generator[515]: Ignoring "noauto" option for root device
	[  +0.315865] systemd-fstab-generator[544]: Ignoring "noauto" option for root device
	[  +6.792896] systemd-fstab-generator[605]: Ignoring "noauto" option for root device
	[  +0.059937] kauditd_printk_skb: 158 callbacks suppressed
	[ +14.232197] systemd-fstab-generator[867]: Ignoring "noauto" option for root device
	[  +0.066766] kauditd_printk_skb: 18 callbacks suppressed
	[Feb29 02:18] systemd-fstab-generator[7959]: Ignoring "noauto" option for root device
	[  +0.063045] kauditd_printk_skb: 21 callbacks suppressed
	[Feb29 02:20] systemd-fstab-generator[9666]: Ignoring "noauto" option for root device
	[  +0.073310] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 02:31:48 up 17 min,  0 users,  load average: 0.00, 0.08, 0.12
	Linux old-k8s-version-254968 5.10.207 #1 SMP Fri Feb 23 02:44:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Feb 29 02:31:46 old-k8s-version-254968 kubelet[18981]: F0229 02:31:46.200906   18981 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Feb 29 02:31:46 old-k8s-version-254968 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Feb 29 02:31:46 old-k8s-version-254968 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Feb 29 02:31:46 old-k8s-version-254968 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 879.
	Feb 29 02:31:46 old-k8s-version-254968 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Feb 29 02:31:46 old-k8s-version-254968 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Feb 29 02:31:46 old-k8s-version-254968 kubelet[18994]: I0229 02:31:46.902478   18994 server.go:410] Version: v1.16.0
	Feb 29 02:31:46 old-k8s-version-254968 kubelet[18994]: I0229 02:31:46.902980   18994 plugins.go:100] No cloud provider specified.
	Feb 29 02:31:46 old-k8s-version-254968 kubelet[18994]: I0229 02:31:46.903041   18994 server.go:773] Client rotation is on, will bootstrap in background
	Feb 29 02:31:46 old-k8s-version-254968 kubelet[18994]: I0229 02:31:46.905644   18994 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Feb 29 02:31:46 old-k8s-version-254968 kubelet[18994]: W0229 02:31:46.906586   18994 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Feb 29 02:31:46 old-k8s-version-254968 kubelet[18994]: F0229 02:31:46.906674   18994 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Feb 29 02:31:46 old-k8s-version-254968 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Feb 29 02:31:46 old-k8s-version-254968 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Feb 29 02:31:47 old-k8s-version-254968 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 880.
	Feb 29 02:31:47 old-k8s-version-254968 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Feb 29 02:31:47 old-k8s-version-254968 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Feb 29 02:31:47 old-k8s-version-254968 kubelet[19019]: I0229 02:31:47.730778   19019 server.go:410] Version: v1.16.0
	Feb 29 02:31:47 old-k8s-version-254968 kubelet[19019]: I0229 02:31:47.731022   19019 plugins.go:100] No cloud provider specified.
	Feb 29 02:31:47 old-k8s-version-254968 kubelet[19019]: I0229 02:31:47.731039   19019 server.go:773] Client rotation is on, will bootstrap in background
	Feb 29 02:31:47 old-k8s-version-254968 kubelet[19019]: I0229 02:31:47.733889   19019 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Feb 29 02:31:47 old-k8s-version-254968 kubelet[19019]: W0229 02:31:47.734891   19019 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Feb 29 02:31:47 old-k8s-version-254968 kubelet[19019]: F0229 02:31:47.734987   19019 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Feb 29 02:31:47 old-k8s-version-254968 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Feb 29 02:31:47 old-k8s-version-254968 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-254968 -n old-k8s-version-254968
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-254968 -n old-k8s-version-254968: exit status 2 (247.087506ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-254968" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (542.52s)
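Note: this failure bottoms out in the kubelet excerpt above: "failed to run Kubelet: mountpoint for cpu not found" means the v1.16.0 kubelet could not find a mounted cpu cgroup controller, so it exits and systemd restarts it (restart counter at 879/880 by the end of the capture). A diagnostic sketch for inspecting the node's cgroup layout, assuming shell access to the profile via minikube ssh (these commands are not part of the recorded run):

	# cgroup2fs here means a unified (v2) hierarchy; a v1.16 kubelet expects v1 controllers
	minikube ssh -p old-k8s-version-254968 -- "stat -fc %T /sys/fs/cgroup/"
	# list mounted cgroup filesystems; the kubelet needs a 'cpu' controller mount among these
	minikube ssh -p old-k8s-version-254968 -- "mount | grep cgroup"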

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (354.98s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
    [warning above repeated 4 times in total]
E0229 02:31:52.590505  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/kindnet-704272/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
    [warning above repeated 45 times in total]
E0229 02:32:37.675433  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/functional-601906/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
    [warning above repeated 20 times in total]
E0229 02:32:57.969078  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/flannel-704272/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
    [warning above repeated 37 times in total]
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
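The warning above is the test helper polling the apiserver for dashboard pods while the cluster is down. For reference, here is a minimal client-go sketch of the same labelled pod list the log shows failing; this is not minikube's actual helper, and the kubeconfig path is a placeholder:

package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder kubeconfig path; the test harness points at the profile's own kubeconfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	// Same request as in the log:
	// GET /api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard
	pods, err := cs.CoreV1().Pods("kubernetes-dashboard").List(context.TODO(),
		metav1.ListOptions{LabelSelector: "k8s-app=kubernetes-dashboard"})
	if err != nil {
		// While the apiserver is stopped this surfaces as
		// "dial tcp <ip>:8443: connect: connection refused".
		log.Fatalf("pod list failed: %v", err)
	}
	fmt.Printf("found %d dashboard pods\n", len(pods.Items))
}

The one-off equivalent from a shell would be kubectl get pods -n kubernetes-dashboard -l k8s-app=kubernetes-dashboard, which fails the same way until the apiserver is reachable again.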
E0229 02:33:46.416590  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/enable-default-cni-704272/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
[previous warning repeated 31 more times]
E0229 02:34:18.529856  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/bridge-704272/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
[previous warning repeated 9 more times]
E0229 02:34:28.597065  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/addons-026134/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
[previous warning repeated 48 more times]
E0229 02:35:17.835354  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/auto-704272/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
[previous warning repeated 30 more times]
E0229 02:35:48.921842  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/calico-704272/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
[previous warning repeated 4 more times]
E0229 02:35:54.098689  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/default-k8s-diff-port-254367/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
E0229 02:35:54.492908  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/no-preload-907398/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
[previous warning repeated 10 more times]
E0229 02:36:05.889759  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/custom-flannel-704272/client.crt: no such file or directory
E0229 02:36:14.620735  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/functional-601906/client.crt: no such file or directory
E0229 02:36:52.590123  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/kindnet-704272/client.crt: no such file or directory
E0229 02:37:17.144125  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/default-k8s-diff-port-254367/client.crt: no such file or directory
E0229 02:37:17.536305  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/no-preload-907398/client.crt: no such file or directory
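
The dashboard pod-list warning above repeats throughout this window because the test helper keeps polling the apiserver until its deadline, and every poll fails with connection refused. A roughly equivalent manual check (a sketch, assuming the profile's context is present in the kubeconfig) would be:

	# Hypothetical reproduction of the helper's pod-list poll; the namespace
	# and label selector are taken from the warning URL above.
	kubectl --context old-k8s-version-254968 -n kubernetes-dashboard \
	  get pods -l k8s-app=kubernetes-dashboard
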
start_stop_delete_test.go:287: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-254968 -n old-k8s-version-254968
start_stop_delete_test.go:287: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-254968 -n old-k8s-version-254968: exit status 2 (265.53462ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:287: status error: exit status 2 (may be ok)
start_stop_delete_test.go:287: "old-k8s-version-254968" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
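
The status probe above extracts a single field of minikube's status struct with a Go template. A broader probe of the same profile (a sketch; it reuses only flags that appear elsewhere in this report, and assumes the Host and APIServer template fields, both of which occur in probes in this log) would be:

	# Hedged example: print host and apiserver state in one Go template.
	out/minikube-linux-amd64 status --format='{{.Host}}/{{.APIServer}}' \
	  -p old-k8s-version-254968 -n old-k8s-version-254968
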
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-254968 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-254968 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.52µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-254968 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
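
The expected image comes from the addon-enable step recorded in the Audit table below. A sketch of checking the deployed image directly (it assumes a reachable apiserver, which is precisely what is missing here):

	# What image does dashboard-metrics-scraper actually run? The addon was
	# enabled with --images=MetricsScraper=registry.k8s.io/echoserver:1.4.
	kubectl --context old-k8s-version-254968 -n kubernetes-dashboard \
	  get deploy/dashboard-metrics-scraper \
	  -o jsonpath='{.spec.template.spec.containers[*].image}'
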
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-254968 -n old-k8s-version-254968
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-254968 -n old-k8s-version-254968: exit status 2 (246.782535ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-254968 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-254968 logs -n 25: (1.141608905s)
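
The dump below is klog-formatted ([IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg, per its own header). A sketch for pulling only warnings and errors out of the same logs command used in the post-mortem:

	# Hedged example: filter the post-mortem logs by klog severity prefix
	# (E = error, W = warning); tolerate leading indentation in the dump.
	out/minikube-linux-amd64 -p old-k8s-version-254968 logs -n 25 \
	  | grep -E '^[[:space:]]*[EW][0-9]{4} '
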
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p no-preload-907398                                   | no-preload-907398            | jenkins | v1.32.0 | 29 Feb 24 02:12 UTC | 29 Feb 24 02:18 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-254367       | default-k8s-diff-port-254367 | jenkins | v1.32.0 | 29 Feb 24 02:12 UTC | 29 Feb 24 02:12 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-254367 | jenkins | v1.32.0 | 29 Feb 24 02:12 UTC | 29 Feb 24 02:18 UTC |
	|         | default-k8s-diff-port-254367                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-665766            | embed-certs-665766           | jenkins | v1.32.0 | 29 Feb 24 02:13 UTC | 29 Feb 24 02:13 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-665766                                  | embed-certs-665766           | jenkins | v1.32.0 | 29 Feb 24 02:13 UTC | 29 Feb 24 02:14 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-254968                              | old-k8s-version-254968       | jenkins | v1.32.0 | 29 Feb 24 02:14 UTC | 29 Feb 24 02:14 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-254968             | old-k8s-version-254968       | jenkins | v1.32.0 | 29 Feb 24 02:14 UTC | 29 Feb 24 02:14 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-254968                              | old-k8s-version-254968       | jenkins | v1.32.0 | 29 Feb 24 02:14 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-665766                 | embed-certs-665766           | jenkins | v1.32.0 | 29 Feb 24 02:15 UTC | 29 Feb 24 02:15 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-665766                                  | embed-certs-665766           | jenkins | v1.32.0 | 29 Feb 24 02:15 UTC | 29 Feb 24 02:25 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| image   | no-preload-907398 image list                           | no-preload-907398            | jenkins | v1.32.0 | 29 Feb 24 02:18 UTC | 29 Feb 24 02:18 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p no-preload-907398                                   | no-preload-907398            | jenkins | v1.32.0 | 29 Feb 24 02:18 UTC | 29 Feb 24 02:18 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p no-preload-907398                                   | no-preload-907398            | jenkins | v1.32.0 | 29 Feb 24 02:18 UTC | 29 Feb 24 02:18 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p no-preload-907398                                   | no-preload-907398            | jenkins | v1.32.0 | 29 Feb 24 02:18 UTC | 29 Feb 24 02:18 UTC |
	| delete  | -p no-preload-907398                                   | no-preload-907398            | jenkins | v1.32.0 | 29 Feb 24 02:18 UTC | 29 Feb 24 02:18 UTC |
	| image   | default-k8s-diff-port-254367                           | default-k8s-diff-port-254367 | jenkins | v1.32.0 | 29 Feb 24 02:18 UTC | 29 Feb 24 02:18 UTC |
	|         | image list --format=json                               |                              |         |         |                     |                     |
	| pause   | -p                                                     | default-k8s-diff-port-254367 | jenkins | v1.32.0 | 29 Feb 24 02:18 UTC | 29 Feb 24 02:18 UTC |
	|         | default-k8s-diff-port-254367                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p                                                     | default-k8s-diff-port-254367 | jenkins | v1.32.0 | 29 Feb 24 02:18 UTC | 29 Feb 24 02:18 UTC |
	|         | default-k8s-diff-port-254367                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-254367 | jenkins | v1.32.0 | 29 Feb 24 02:18 UTC | 29 Feb 24 02:18 UTC |
	|         | default-k8s-diff-port-254367                           |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-254367 | jenkins | v1.32.0 | 29 Feb 24 02:18 UTC | 29 Feb 24 02:18 UTC |
	|         | default-k8s-diff-port-254367                           |                              |         |         |                     |                     |
	| image   | embed-certs-665766 image list                          | embed-certs-665766           | jenkins | v1.32.0 | 29 Feb 24 02:25 UTC | 29 Feb 24 02:25 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p embed-certs-665766                                  | embed-certs-665766           | jenkins | v1.32.0 | 29 Feb 24 02:25 UTC | 29 Feb 24 02:25 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p embed-certs-665766                                  | embed-certs-665766           | jenkins | v1.32.0 | 29 Feb 24 02:25 UTC | 29 Feb 24 02:25 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p embed-certs-665766                                  | embed-certs-665766           | jenkins | v1.32.0 | 29 Feb 24 02:25 UTC | 29 Feb 24 02:25 UTC |
	| delete  | -p embed-certs-665766                                  | embed-certs-665766           | jenkins | v1.32.0 | 29 Feb 24 02:25 UTC | 29 Feb 24 02:25 UTC |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/29 02:15:00
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.22.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0229 02:15:00.195513  361093 out.go:291] Setting OutFile to fd 1 ...
	I0229 02:15:00.195780  361093 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 02:15:00.195791  361093 out.go:304] Setting ErrFile to fd 2...
	I0229 02:15:00.195798  361093 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 02:15:00.196014  361093 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18063-309085/.minikube/bin
	I0229 02:15:00.196538  361093 out.go:298] Setting JSON to false
	I0229 02:15:00.197510  361093 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":7044,"bootTime":1709165856,"procs":215,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1052-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0229 02:15:00.197578  361093 start.go:139] virtualization: kvm guest
	I0229 02:15:00.199670  361093 out.go:177] * [embed-certs-665766] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0229 02:15:00.201014  361093 out.go:177]   - MINIKUBE_LOCATION=18063
	I0229 02:15:00.202314  361093 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0229 02:15:00.201016  361093 notify.go:220] Checking for updates...
	I0229 02:15:00.204683  361093 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18063-309085/kubeconfig
	I0229 02:15:00.205981  361093 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18063-309085/.minikube
	I0229 02:15:00.207104  361093 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0229 02:15:00.208151  361093 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0229 02:15:00.209800  361093 config.go:182] Loaded profile config "embed-certs-665766": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0229 02:15:00.210427  361093 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0229 02:15:00.210478  361093 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:15:00.226129  361093 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35133
	I0229 02:15:00.226543  361093 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:15:00.227211  361093 main.go:141] libmachine: Using API Version  1
	I0229 02:15:00.227260  361093 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:15:00.227606  361093 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:15:00.227858  361093 main.go:141] libmachine: (embed-certs-665766) Calling .DriverName
	I0229 02:15:00.228153  361093 driver.go:392] Setting default libvirt URI to qemu:///system
	I0229 02:15:00.228600  361093 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0229 02:15:00.228648  361093 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:15:00.244111  361093 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46163
	I0229 02:15:00.244523  361093 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:15:00.244927  361093 main.go:141] libmachine: Using API Version  1
	I0229 02:15:00.244955  361093 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:15:00.245291  361093 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:15:00.245488  361093 main.go:141] libmachine: (embed-certs-665766) Calling .DriverName
	I0229 02:15:00.279319  361093 out.go:177] * Using the kvm2 driver based on existing profile
	I0229 02:15:00.280565  361093 start.go:299] selected driver: kvm2
	I0229 02:15:00.280576  361093 start.go:903] validating driver "kvm2" against &{Name:embed-certs-665766 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-665766 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.252 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 02:15:00.280689  361093 start.go:914] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0229 02:15:00.281579  361093 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 02:15:00.281718  361093 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18063-309085/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0229 02:15:00.296404  361093 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0229 02:15:00.296764  361093 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0229 02:15:00.296834  361093 cni.go:84] Creating CNI manager for ""
	I0229 02:15:00.296847  361093 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0229 02:15:00.296856  361093 start_flags.go:323] config:
	{Name:embed-certs-665766 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-665766 Namespace:default A
PIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.252 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0
CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 02:15:00.296993  361093 iso.go:125] acquiring lock: {Name:mk1a6013324bd96d611c2de882ca0af6f4df38f8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 02:15:00.298652  361093 out.go:177] * Starting control plane node embed-certs-665766 in cluster embed-certs-665766
	I0229 02:15:00.299785  361093 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I0229 02:15:00.299837  361093 preload.go:148] Found local preload: /home/jenkins/minikube-integration/18063-309085/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-amd64.tar.lz4
	I0229 02:15:00.299848  361093 cache.go:56] Caching tarball of preloaded images
	I0229 02:15:00.299924  361093 preload.go:174] Found /home/jenkins/minikube-integration/18063-309085/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0229 02:15:00.299936  361093 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on containerd
	I0229 02:15:00.300040  361093 profile.go:148] Saving config to /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/embed-certs-665766/config.json ...
	I0229 02:15:00.300211  361093 start.go:365] acquiring machines lock for embed-certs-665766: {Name:mk8de78527e9cb979575b614e5d893b33768243a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0229 02:15:00.300253  361093 start.go:369] acquired machines lock for "embed-certs-665766" in 22.524µs
	I0229 02:15:00.300268  361093 start.go:96] Skipping create...Using existing machine configuration
	I0229 02:15:00.300281  361093 fix.go:54] fixHost starting: 
	I0229 02:15:00.300618  361093 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0229 02:15:00.300658  361093 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:15:00.315579  361093 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37361
	I0229 02:15:00.315993  361093 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:15:00.316460  361093 main.go:141] libmachine: Using API Version  1
	I0229 02:15:00.316481  361093 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:15:00.316776  361093 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:15:00.317012  361093 main.go:141] libmachine: (embed-certs-665766) Calling .DriverName
	I0229 02:15:00.317164  361093 main.go:141] libmachine: (embed-certs-665766) Calling .GetState
	I0229 02:15:00.318770  361093 fix.go:102] recreateIfNeeded on embed-certs-665766: state=Stopped err=<nil>
	I0229 02:15:00.318802  361093 main.go:141] libmachine: (embed-certs-665766) Calling .DriverName
	W0229 02:15:00.318984  361093 fix.go:128] unexpected machine state, will restart: <nil>
	I0229 02:15:00.320597  361093 out.go:177] * Restarting existing kvm2 VM for "embed-certs-665766" ...
	I0229 02:14:57.672798  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:14:58.172654  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:14:58.673282  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:14:59.173312  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:14:59.672878  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:00.172953  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:00.673170  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:01.173005  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:01.672595  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:02.172649  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
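
These pgrep probes recur on a fixed ~500ms cadence: the runner simply re-executes the check over SSH until kube-apiserver shows up or a deadline passes. A minimal Go sketch of that poll loop; runSSH is a hypothetical stand-in for minikube's ssh_runner, not its real API:

package procwait

import (
	"fmt"
	"time"
)

// waitForProcess polls a remote host until pgrep finds a matching
// process or the timeout elapses. runSSH should return a non-nil
// error when the remote command exits non-zero.
func waitForProcess(runSSH func(cmd string) error, pattern string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// pgrep exits 0 only when a live process matches the pattern.
		if err := runSSH("sudo pgrep -xnf " + pattern); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond) // fixed ~500ms cadence, as in the log
	}
	return fmt.Errorf("process %q did not appear within %v", pattern, timeout)
}
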
	I0229 02:14:58.736314  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:15:00.738234  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:15:02.738646  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:14:59.777395  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:15:01.781443  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
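
The pod_ready probes above reduce to reading the PodReady condition off the pod's status. A sketch of that check against client-go's core/v1 types (the pod object is assumed to be already fetched):

package podready

import corev1 "k8s.io/api/core/v1"

// isPodReady reports whether the pod's PodReady condition is True —
// the check behind the pod_ready.go lines above, where metrics-server
// keeps reporting "Ready":"False".
func isPodReady(pod *corev1.Pod) bool {
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}
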
	I0229 02:15:00.321860  361093 main.go:141] libmachine: (embed-certs-665766) Calling .Start
	I0229 02:15:00.322009  361093 main.go:141] libmachine: (embed-certs-665766) Ensuring networks are active...
	I0229 02:15:00.322780  361093 main.go:141] libmachine: (embed-certs-665766) Ensuring network default is active
	I0229 02:15:00.323102  361093 main.go:141] libmachine: (embed-certs-665766) Ensuring network mk-embed-certs-665766 is active
	I0229 02:15:00.323540  361093 main.go:141] libmachine: (embed-certs-665766) Getting domain xml...
	I0229 02:15:00.324206  361093 main.go:141] libmachine: (embed-certs-665766) Creating domain...
	I0229 02:15:01.564400  361093 main.go:141] libmachine: (embed-certs-665766) Waiting to get IP...
	I0229 02:15:01.565163  361093 main.go:141] libmachine: (embed-certs-665766) DBG | domain embed-certs-665766 has defined MAC address 52:54:00:0f:ed:e3 in network mk-embed-certs-665766
	I0229 02:15:01.565606  361093 main.go:141] libmachine: (embed-certs-665766) DBG | unable to find current IP address of domain embed-certs-665766 in network mk-embed-certs-665766
	I0229 02:15:01.565665  361093 main.go:141] libmachine: (embed-certs-665766) DBG | I0229 02:15:01.565569  361128 retry.go:31] will retry after 283.275743ms: waiting for machine to come up
	I0229 02:15:01.850148  361093 main.go:141] libmachine: (embed-certs-665766) DBG | domain embed-certs-665766 has defined MAC address 52:54:00:0f:ed:e3 in network mk-embed-certs-665766
	I0229 02:15:01.850742  361093 main.go:141] libmachine: (embed-certs-665766) DBG | unable to find current IP address of domain embed-certs-665766 in network mk-embed-certs-665766
	I0229 02:15:01.850796  361093 main.go:141] libmachine: (embed-certs-665766) DBG | I0229 02:15:01.850687  361128 retry.go:31] will retry after 252.966549ms: waiting for machine to come up
	I0229 02:15:02.105129  361093 main.go:141] libmachine: (embed-certs-665766) DBG | domain embed-certs-665766 has defined MAC address 52:54:00:0f:ed:e3 in network mk-embed-certs-665766
	I0229 02:15:02.105699  361093 main.go:141] libmachine: (embed-certs-665766) DBG | unable to find current IP address of domain embed-certs-665766 in network mk-embed-certs-665766
	I0229 02:15:02.105732  361093 main.go:141] libmachine: (embed-certs-665766) DBG | I0229 02:15:02.105660  361128 retry.go:31] will retry after 470.28664ms: waiting for machine to come up
	I0229 02:15:02.577216  361093 main.go:141] libmachine: (embed-certs-665766) DBG | domain embed-certs-665766 has defined MAC address 52:54:00:0f:ed:e3 in network mk-embed-certs-665766
	I0229 02:15:02.577778  361093 main.go:141] libmachine: (embed-certs-665766) DBG | unable to find current IP address of domain embed-certs-665766 in network mk-embed-certs-665766
	I0229 02:15:02.577807  361093 main.go:141] libmachine: (embed-certs-665766) DBG | I0229 02:15:02.577721  361128 retry.go:31] will retry after 527.191742ms: waiting for machine to come up
	I0229 02:15:03.106209  361093 main.go:141] libmachine: (embed-certs-665766) DBG | domain embed-certs-665766 has defined MAC address 52:54:00:0f:ed:e3 in network mk-embed-certs-665766
	I0229 02:15:03.106698  361093 main.go:141] libmachine: (embed-certs-665766) DBG | unable to find current IP address of domain embed-certs-665766 in network mk-embed-certs-665766
	I0229 02:15:03.106725  361093 main.go:141] libmachine: (embed-certs-665766) DBG | I0229 02:15:03.106650  361128 retry.go:31] will retry after 472.107889ms: waiting for machine to come up
	I0229 02:15:03.580375  361093 main.go:141] libmachine: (embed-certs-665766) DBG | domain embed-certs-665766 has defined MAC address 52:54:00:0f:ed:e3 in network mk-embed-certs-665766
	I0229 02:15:03.580945  361093 main.go:141] libmachine: (embed-certs-665766) DBG | unable to find current IP address of domain embed-certs-665766 in network mk-embed-certs-665766
	I0229 02:15:03.580972  361093 main.go:141] libmachine: (embed-certs-665766) DBG | I0229 02:15:03.580890  361128 retry.go:31] will retry after 683.066759ms: waiting for machine to come up
	I0229 02:15:04.265769  361093 main.go:141] libmachine: (embed-certs-665766) DBG | domain embed-certs-665766 has defined MAC address 52:54:00:0f:ed:e3 in network mk-embed-certs-665766
	I0229 02:15:04.266340  361093 main.go:141] libmachine: (embed-certs-665766) DBG | unable to find current IP address of domain embed-certs-665766 in network mk-embed-certs-665766
	I0229 02:15:04.266370  361093 main.go:141] libmachine: (embed-certs-665766) DBG | I0229 02:15:04.266282  361128 retry.go:31] will retry after 1.031418978s: waiting for machine to come up
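
The retry delays here (283ms, 470ms, ... up to several seconds) grow roughly geometrically with jitter while the driver waits for a DHCP lease. A minimal sketch of such a backoff loop; lookupIP is a hypothetical helper that would scan the libvirt network's leases for the domain's MAC address:

package dhcpwait

import (
	"fmt"
	"math/rand"
	"time"
)

// waitForIP retries lookupIP with jittered, growing backoff until an
// address appears or maxWait elapses.
func waitForIP(lookupIP func() (string, error), maxWait time.Duration) (string, error) {
	deadline := time.Now().Add(maxWait)
	delay := 250 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(); err == nil {
			return ip, nil
		}
		// Sleep the base delay plus up to 50% jitter, then grow it,
		// which yields roughly the interval progression seen above.
		time.Sleep(delay + time.Duration(rand.Int63n(int64(delay/2))))
		if delay < 4*time.Second {
			delay = delay * 3 / 2
		}
	}
	return "", fmt.Errorf("no DHCP lease for machine within %v", maxWait)
}
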
	I0229 02:15:02.673169  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:03.173251  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:03.672864  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:04.173580  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:04.672736  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:05.173278  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:05.672747  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:06.173514  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:06.672853  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:07.173295  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:05.238704  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:15:07.736326  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:15:04.278766  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:15:06.779170  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:15:05.299213  361093 main.go:141] libmachine: (embed-certs-665766) DBG | domain embed-certs-665766 has defined MAC address 52:54:00:0f:ed:e3 in network mk-embed-certs-665766
	I0229 02:15:05.299740  361093 main.go:141] libmachine: (embed-certs-665766) DBG | unable to find current IP address of domain embed-certs-665766 in network mk-embed-certs-665766
	I0229 02:15:05.299773  361093 main.go:141] libmachine: (embed-certs-665766) DBG | I0229 02:15:05.299673  361128 retry.go:31] will retry after 1.037425014s: waiting for machine to come up
	I0229 02:15:06.339189  361093 main.go:141] libmachine: (embed-certs-665766) DBG | domain embed-certs-665766 has defined MAC address 52:54:00:0f:ed:e3 in network mk-embed-certs-665766
	I0229 02:15:06.339656  361093 main.go:141] libmachine: (embed-certs-665766) DBG | unable to find current IP address of domain embed-certs-665766 in network mk-embed-certs-665766
	I0229 02:15:06.339688  361093 main.go:141] libmachine: (embed-certs-665766) DBG | I0229 02:15:06.339607  361128 retry.go:31] will retry after 1.829261156s: waiting for machine to come up
	I0229 02:15:08.171250  361093 main.go:141] libmachine: (embed-certs-665766) DBG | domain embed-certs-665766 has defined MAC address 52:54:00:0f:ed:e3 in network mk-embed-certs-665766
	I0229 02:15:08.171913  361093 main.go:141] libmachine: (embed-certs-665766) DBG | unable to find current IP address of domain embed-certs-665766 in network mk-embed-certs-665766
	I0229 02:15:08.171940  361093 main.go:141] libmachine: (embed-certs-665766) DBG | I0229 02:15:08.171868  361128 retry.go:31] will retry after 1.840049442s: waiting for machine to come up
	I0229 02:15:10.015035  361093 main.go:141] libmachine: (embed-certs-665766) DBG | domain embed-certs-665766 has defined MAC address 52:54:00:0f:ed:e3 in network mk-embed-certs-665766
	I0229 02:15:10.015601  361093 main.go:141] libmachine: (embed-certs-665766) DBG | unable to find current IP address of domain embed-certs-665766 in network mk-embed-certs-665766
	I0229 02:15:10.015624  361093 main.go:141] libmachine: (embed-certs-665766) DBG | I0229 02:15:10.015545  361128 retry.go:31] will retry after 2.792261425s: waiting for machine to come up
	I0229 02:15:07.673496  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:08.173235  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:08.672970  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:09.173203  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:09.672669  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:10.172971  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:10.673523  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:11.172857  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:11.672596  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:12.173541  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:10.236392  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:15:12.241873  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:15:09.277845  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:15:11.280119  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:15:13.777454  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:15:12.811472  361093 main.go:141] libmachine: (embed-certs-665766) DBG | domain embed-certs-665766 has defined MAC address 52:54:00:0f:ed:e3 in network mk-embed-certs-665766
	I0229 02:15:12.812070  361093 main.go:141] libmachine: (embed-certs-665766) DBG | unable to find current IP address of domain embed-certs-665766 in network mk-embed-certs-665766
	I0229 02:15:12.812092  361093 main.go:141] libmachine: (embed-certs-665766) DBG | I0229 02:15:12.812028  361128 retry.go:31] will retry after 3.422816729s: waiting for machine to come up
	I0229 02:15:12.673205  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:13.173523  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:13.672774  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:14.173115  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:14.673616  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:15.172831  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:15.673160  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:16.172966  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:16.673287  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:17.172640  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:14.243740  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:15:16.736133  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:15:15.778484  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:15:17.778658  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:15:16.236374  361093 main.go:141] libmachine: (embed-certs-665766) DBG | domain embed-certs-665766 has defined MAC address 52:54:00:0f:ed:e3 in network mk-embed-certs-665766
	I0229 02:15:16.236943  361093 main.go:141] libmachine: (embed-certs-665766) DBG | unable to find current IP address of domain embed-certs-665766 in network mk-embed-certs-665766
	I0229 02:15:16.236973  361093 main.go:141] libmachine: (embed-certs-665766) DBG | I0229 02:15:16.236905  361128 retry.go:31] will retry after 3.865566322s: waiting for machine to come up
	I0229 02:15:20.106964  361093 main.go:141] libmachine: (embed-certs-665766) DBG | domain embed-certs-665766 has defined MAC address 52:54:00:0f:ed:e3 in network mk-embed-certs-665766
	I0229 02:15:20.107455  361093 main.go:141] libmachine: (embed-certs-665766) Found IP for machine: 192.168.39.252
	I0229 02:15:20.107480  361093 main.go:141] libmachine: (embed-certs-665766) Reserving static IP address...
	I0229 02:15:20.107494  361093 main.go:141] libmachine: (embed-certs-665766) DBG | domain embed-certs-665766 has current primary IP address 192.168.39.252 and MAC address 52:54:00:0f:ed:e3 in network mk-embed-certs-665766
	I0229 02:15:20.107964  361093 main.go:141] libmachine: (embed-certs-665766) Reserved static IP address: 192.168.39.252
	I0229 02:15:20.107994  361093 main.go:141] libmachine: (embed-certs-665766) Waiting for SSH to be available...
	I0229 02:15:20.108041  361093 main.go:141] libmachine: (embed-certs-665766) DBG | found host DHCP lease matching {name: "embed-certs-665766", mac: "52:54:00:0f:ed:e3", ip: "192.168.39.252"} in network mk-embed-certs-665766: {Iface:virbr4 ExpiryTime:2024-02-29 03:15:12 +0000 UTC Type:0 Mac:52:54:00:0f:ed:e3 Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:embed-certs-665766 Clientid:01:52:54:00:0f:ed:e3}
	I0229 02:15:20.108074  361093 main.go:141] libmachine: (embed-certs-665766) DBG | skip adding static IP to network mk-embed-certs-665766 - found existing host DHCP lease matching {name: "embed-certs-665766", mac: "52:54:00:0f:ed:e3", ip: "192.168.39.252"}
	I0229 02:15:20.108095  361093 main.go:141] libmachine: (embed-certs-665766) DBG | Getting to WaitForSSH function...
	I0229 02:15:20.110175  361093 main.go:141] libmachine: (embed-certs-665766) DBG | domain embed-certs-665766 has defined MAC address 52:54:00:0f:ed:e3 in network mk-embed-certs-665766
	I0229 02:15:20.110485  361093 main.go:141] libmachine: (embed-certs-665766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:ed:e3", ip: ""} in network mk-embed-certs-665766: {Iface:virbr4 ExpiryTime:2024-02-29 03:15:12 +0000 UTC Type:0 Mac:52:54:00:0f:ed:e3 Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:embed-certs-665766 Clientid:01:52:54:00:0f:ed:e3}
	I0229 02:15:20.110511  361093 main.go:141] libmachine: (embed-certs-665766) DBG | domain embed-certs-665766 has defined IP address 192.168.39.252 and MAC address 52:54:00:0f:ed:e3 in network mk-embed-certs-665766
	I0229 02:15:20.110667  361093 main.go:141] libmachine: (embed-certs-665766) DBG | Using SSH client type: external
	I0229 02:15:20.110696  361093 main.go:141] libmachine: (embed-certs-665766) DBG | Using SSH private key: /home/jenkins/minikube-integration/18063-309085/.minikube/machines/embed-certs-665766/id_rsa (-rw-------)
	I0229 02:15:20.110761  361093 main.go:141] libmachine: (embed-certs-665766) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.252 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18063-309085/.minikube/machines/embed-certs-665766/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0229 02:15:20.110788  361093 main.go:141] libmachine: (embed-certs-665766) DBG | About to run SSH command:
	I0229 02:15:20.110807  361093 main.go:141] libmachine: (embed-certs-665766) DBG | exit 0
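
With no agent socket available, the driver shells out to the system ssh binary with the pinned option set recorded in the DBG line above. A sketch assembling an equivalent command with os/exec; the options are copied from the log, but the helper itself is illustrative:

package sshexec

import (
	"fmt"
	"os/exec"
)

// externalSSH builds an invocation equivalent to the one logged
// above: no user config file, pinned options, key-only auth.
func externalSSH(user, host, keyPath, remoteCmd string) *exec.Cmd {
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3",
		"-o", "ConnectTimeout=10",
		"-o", "PasswordAuthentication=no",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"-p", "22",
		fmt.Sprintf("%s@%s", user, host),
		remoteCmd,
	}
	return exec.Command("ssh", args...)
}

// The readiness probe in the log amounts to:
// externalSSH("docker", "192.168.39.252", "<id_rsa path>", "exit 0").Run()
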
	I0229 02:15:17.672587  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:18.173318  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:18.673512  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:19.172966  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:19.673611  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:20.172605  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:20.672736  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:21.173587  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:21.673298  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:22.172625  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:19.238381  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:15:21.736665  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:15:20.246600  361093 main.go:141] libmachine: (embed-certs-665766) DBG | SSH cmd err, output: <nil>: 
	I0229 02:15:20.247008  361093 main.go:141] libmachine: (embed-certs-665766) Calling .GetConfigRaw
	I0229 02:15:20.247628  361093 main.go:141] libmachine: (embed-certs-665766) Calling .GetIP
	I0229 02:15:20.250151  361093 main.go:141] libmachine: (embed-certs-665766) DBG | domain embed-certs-665766 has defined MAC address 52:54:00:0f:ed:e3 in network mk-embed-certs-665766
	I0229 02:15:20.250492  361093 main.go:141] libmachine: (embed-certs-665766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:ed:e3", ip: ""} in network mk-embed-certs-665766: {Iface:virbr4 ExpiryTime:2024-02-29 03:15:12 +0000 UTC Type:0 Mac:52:54:00:0f:ed:e3 Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:embed-certs-665766 Clientid:01:52:54:00:0f:ed:e3}
	I0229 02:15:20.250524  361093 main.go:141] libmachine: (embed-certs-665766) DBG | domain embed-certs-665766 has defined IP address 192.168.39.252 and MAC address 52:54:00:0f:ed:e3 in network mk-embed-certs-665766
	I0229 02:15:20.250769  361093 profile.go:148] Saving config to /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/embed-certs-665766/config.json ...
	I0229 02:15:20.251020  361093 machine.go:88] provisioning docker machine ...
	I0229 02:15:20.251044  361093 main.go:141] libmachine: (embed-certs-665766) Calling .DriverName
	I0229 02:15:20.251255  361093 main.go:141] libmachine: (embed-certs-665766) Calling .GetMachineName
	I0229 02:15:20.251442  361093 buildroot.go:166] provisioning hostname "embed-certs-665766"
	I0229 02:15:20.251465  361093 main.go:141] libmachine: (embed-certs-665766) Calling .GetMachineName
	I0229 02:15:20.251607  361093 main.go:141] libmachine: (embed-certs-665766) Calling .GetSSHHostname
	I0229 02:15:20.253793  361093 main.go:141] libmachine: (embed-certs-665766) DBG | domain embed-certs-665766 has defined MAC address 52:54:00:0f:ed:e3 in network mk-embed-certs-665766
	I0229 02:15:20.254144  361093 main.go:141] libmachine: (embed-certs-665766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:ed:e3", ip: ""} in network mk-embed-certs-665766: {Iface:virbr4 ExpiryTime:2024-02-29 03:15:12 +0000 UTC Type:0 Mac:52:54:00:0f:ed:e3 Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:embed-certs-665766 Clientid:01:52:54:00:0f:ed:e3}
	I0229 02:15:20.254176  361093 main.go:141] libmachine: (embed-certs-665766) DBG | domain embed-certs-665766 has defined IP address 192.168.39.252 and MAC address 52:54:00:0f:ed:e3 in network mk-embed-certs-665766
	I0229 02:15:20.254345  361093 main.go:141] libmachine: (embed-certs-665766) Calling .GetSSHPort
	I0229 02:15:20.254528  361093 main.go:141] libmachine: (embed-certs-665766) Calling .GetSSHKeyPath
	I0229 02:15:20.254701  361093 main.go:141] libmachine: (embed-certs-665766) Calling .GetSSHKeyPath
	I0229 02:15:20.254886  361093 main.go:141] libmachine: (embed-certs-665766) Calling .GetSSHUsername
	I0229 02:15:20.255075  361093 main.go:141] libmachine: Using SSH client type: native
	I0229 02:15:20.255290  361093 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.252 22 <nil> <nil>}
	I0229 02:15:20.255302  361093 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-665766 && echo "embed-certs-665766" | sudo tee /etc/hostname
	I0229 02:15:20.387006  361093 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-665766
	
	I0229 02:15:20.387037  361093 main.go:141] libmachine: (embed-certs-665766) Calling .GetSSHHostname
	I0229 02:15:20.389660  361093 main.go:141] libmachine: (embed-certs-665766) DBG | domain embed-certs-665766 has defined MAC address 52:54:00:0f:ed:e3 in network mk-embed-certs-665766
	I0229 02:15:20.390034  361093 main.go:141] libmachine: (embed-certs-665766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:ed:e3", ip: ""} in network mk-embed-certs-665766: {Iface:virbr4 ExpiryTime:2024-02-29 03:15:12 +0000 UTC Type:0 Mac:52:54:00:0f:ed:e3 Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:embed-certs-665766 Clientid:01:52:54:00:0f:ed:e3}
	I0229 02:15:20.390075  361093 main.go:141] libmachine: (embed-certs-665766) DBG | domain embed-certs-665766 has defined IP address 192.168.39.252 and MAC address 52:54:00:0f:ed:e3 in network mk-embed-certs-665766
	I0229 02:15:20.390263  361093 main.go:141] libmachine: (embed-certs-665766) Calling .GetSSHPort
	I0229 02:15:20.390512  361093 main.go:141] libmachine: (embed-certs-665766) Calling .GetSSHKeyPath
	I0229 02:15:20.390720  361093 main.go:141] libmachine: (embed-certs-665766) Calling .GetSSHKeyPath
	I0229 02:15:20.390846  361093 main.go:141] libmachine: (embed-certs-665766) Calling .GetSSHUsername
	I0229 02:15:20.391013  361093 main.go:141] libmachine: Using SSH client type: native
	I0229 02:15:20.391195  361093 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.252 22 <nil> <nil>}
	I0229 02:15:20.391212  361093 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-665766' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-665766/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-665766' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0229 02:15:20.517065  361093 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0229 02:15:20.517117  361093 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18063-309085/.minikube CaCertPath:/home/jenkins/minikube-integration/18063-309085/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18063-309085/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18063-309085/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18063-309085/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18063-309085/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18063-309085/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18063-309085/.minikube}
	I0229 02:15:20.517171  361093 buildroot.go:174] setting up certificates
	I0229 02:15:20.517189  361093 provision.go:83] configureAuth start
	I0229 02:15:20.517207  361093 main.go:141] libmachine: (embed-certs-665766) Calling .GetMachineName
	I0229 02:15:20.517534  361093 main.go:141] libmachine: (embed-certs-665766) Calling .GetIP
	I0229 02:15:20.520639  361093 main.go:141] libmachine: (embed-certs-665766) DBG | domain embed-certs-665766 has defined MAC address 52:54:00:0f:ed:e3 in network mk-embed-certs-665766
	I0229 02:15:20.521028  361093 main.go:141] libmachine: (embed-certs-665766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:ed:e3", ip: ""} in network mk-embed-certs-665766: {Iface:virbr4 ExpiryTime:2024-02-29 03:15:12 +0000 UTC Type:0 Mac:52:54:00:0f:ed:e3 Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:embed-certs-665766 Clientid:01:52:54:00:0f:ed:e3}
	I0229 02:15:20.521062  361093 main.go:141] libmachine: (embed-certs-665766) DBG | domain embed-certs-665766 has defined IP address 192.168.39.252 and MAC address 52:54:00:0f:ed:e3 in network mk-embed-certs-665766
	I0229 02:15:20.521231  361093 main.go:141] libmachine: (embed-certs-665766) Calling .GetSSHHostname
	I0229 02:15:20.523702  361093 main.go:141] libmachine: (embed-certs-665766) DBG | domain embed-certs-665766 has defined MAC address 52:54:00:0f:ed:e3 in network mk-embed-certs-665766
	I0229 02:15:20.524078  361093 main.go:141] libmachine: (embed-certs-665766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:ed:e3", ip: ""} in network mk-embed-certs-665766: {Iface:virbr4 ExpiryTime:2024-02-29 03:15:12 +0000 UTC Type:0 Mac:52:54:00:0f:ed:e3 Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:embed-certs-665766 Clientid:01:52:54:00:0f:ed:e3}
	I0229 02:15:20.524128  361093 main.go:141] libmachine: (embed-certs-665766) DBG | domain embed-certs-665766 has defined IP address 192.168.39.252 and MAC address 52:54:00:0f:ed:e3 in network mk-embed-certs-665766
	I0229 02:15:20.524228  361093 provision.go:138] copyHostCerts
	I0229 02:15:20.524293  361093 exec_runner.go:144] found /home/jenkins/minikube-integration/18063-309085/.minikube/ca.pem, removing ...
	I0229 02:15:20.524319  361093 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18063-309085/.minikube/ca.pem
	I0229 02:15:20.524405  361093 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18063-309085/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18063-309085/.minikube/ca.pem (1082 bytes)
	I0229 02:15:20.524527  361093 exec_runner.go:144] found /home/jenkins/minikube-integration/18063-309085/.minikube/cert.pem, removing ...
	I0229 02:15:20.524537  361093 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18063-309085/.minikube/cert.pem
	I0229 02:15:20.524583  361093 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18063-309085/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18063-309085/.minikube/cert.pem (1123 bytes)
	I0229 02:15:20.524674  361093 exec_runner.go:144] found /home/jenkins/minikube-integration/18063-309085/.minikube/key.pem, removing ...
	I0229 02:15:20.524684  361093 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18063-309085/.minikube/key.pem
	I0229 02:15:20.524718  361093 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18063-309085/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18063-309085/.minikube/key.pem (1675 bytes)
	I0229 02:15:20.524803  361093 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18063-309085/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18063-309085/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18063-309085/.minikube/certs/ca-key.pem org=jenkins.embed-certs-665766 san=[192.168.39.252 192.168.39.252 localhost 127.0.0.1 minikube embed-certs-665766]
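
The server certificate is minted locally with the SAN list shown above and signed by the profile's CA. A compact sketch of the crypto/x509 template such a step might use; CA loading, key generation, and signing are elided, and the expiry simply matches the CertExpiration value in the config dump:

package certs

import (
	"crypto/x509"
	"crypto/x509/pkix"
	"math/big"
	"net"
	"time"
)

// serverCertTemplate builds an x509 template carrying SANs like those
// provision.go logs above; signing against the CA would then go
// through x509.CreateCertificate.
func serverCertTemplate(org string, dnsNames []string, ips []net.IP) x509.Certificate {
	return x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()), // illustrative serial only
		Subject:      pkix.Name{Organization: []string{org}},
		NotBefore:    time.Now().Add(-time.Hour),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     dnsNames, // e.g. localhost, minikube, embed-certs-665766
		IPAddresses:  ips,      // e.g. 192.168.39.252, 127.0.0.1
	}
}
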
	I0229 02:15:20.822225  361093 provision.go:172] copyRemoteCerts
	I0229 02:15:20.822298  361093 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0229 02:15:20.822346  361093 main.go:141] libmachine: (embed-certs-665766) Calling .GetSSHHostname
	I0229 02:15:20.825396  361093 main.go:141] libmachine: (embed-certs-665766) DBG | domain embed-certs-665766 has defined MAC address 52:54:00:0f:ed:e3 in network mk-embed-certs-665766
	I0229 02:15:20.825833  361093 main.go:141] libmachine: (embed-certs-665766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:ed:e3", ip: ""} in network mk-embed-certs-665766: {Iface:virbr4 ExpiryTime:2024-02-29 03:15:12 +0000 UTC Type:0 Mac:52:54:00:0f:ed:e3 Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:embed-certs-665766 Clientid:01:52:54:00:0f:ed:e3}
	I0229 02:15:20.825863  361093 main.go:141] libmachine: (embed-certs-665766) DBG | domain embed-certs-665766 has defined IP address 192.168.39.252 and MAC address 52:54:00:0f:ed:e3 in network mk-embed-certs-665766
	I0229 02:15:20.826114  361093 main.go:141] libmachine: (embed-certs-665766) Calling .GetSSHPort
	I0229 02:15:20.826349  361093 main.go:141] libmachine: (embed-certs-665766) Calling .GetSSHKeyPath
	I0229 02:15:20.826496  361093 main.go:141] libmachine: (embed-certs-665766) Calling .GetSSHUsername
	I0229 02:15:20.826626  361093 sshutil.go:53] new ssh client: &{IP:192.168.39.252 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-309085/.minikube/machines/embed-certs-665766/id_rsa Username:docker}
	I0229 02:15:20.915099  361093 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-309085/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0229 02:15:20.942985  361093 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-309085/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0229 02:15:20.974642  361093 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-309085/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0229 02:15:21.002039  361093 provision.go:86] duration metric: configureAuth took 484.832048ms
	I0229 02:15:21.002101  361093 buildroot.go:189] setting minikube options for container-runtime
	I0229 02:15:21.002327  361093 config.go:182] Loaded profile config "embed-certs-665766": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0229 02:15:21.002341  361093 machine.go:91] provisioned docker machine in 751.30636ms
	I0229 02:15:21.002350  361093 start.go:300] post-start starting for "embed-certs-665766" (driver="kvm2")
	I0229 02:15:21.002361  361093 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0229 02:15:21.002433  361093 main.go:141] libmachine: (embed-certs-665766) Calling .DriverName
	I0229 02:15:21.002803  361093 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0229 02:15:21.002843  361093 main.go:141] libmachine: (embed-certs-665766) Calling .GetSSHHostname
	I0229 02:15:21.005633  361093 main.go:141] libmachine: (embed-certs-665766) DBG | domain embed-certs-665766 has defined MAC address 52:54:00:0f:ed:e3 in network mk-embed-certs-665766
	I0229 02:15:21.006105  361093 main.go:141] libmachine: (embed-certs-665766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:ed:e3", ip: ""} in network mk-embed-certs-665766: {Iface:virbr4 ExpiryTime:2024-02-29 03:15:12 +0000 UTC Type:0 Mac:52:54:00:0f:ed:e3 Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:embed-certs-665766 Clientid:01:52:54:00:0f:ed:e3}
	I0229 02:15:21.006141  361093 main.go:141] libmachine: (embed-certs-665766) DBG | domain embed-certs-665766 has defined IP address 192.168.39.252 and MAC address 52:54:00:0f:ed:e3 in network mk-embed-certs-665766
	I0229 02:15:21.006336  361093 main.go:141] libmachine: (embed-certs-665766) Calling .GetSSHPort
	I0229 02:15:21.006562  361093 main.go:141] libmachine: (embed-certs-665766) Calling .GetSSHKeyPath
	I0229 02:15:21.006784  361093 main.go:141] libmachine: (embed-certs-665766) Calling .GetSSHUsername
	I0229 02:15:21.006972  361093 sshutil.go:53] new ssh client: &{IP:192.168.39.252 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-309085/.minikube/machines/embed-certs-665766/id_rsa Username:docker}
	I0229 02:15:21.094951  361093 ssh_runner.go:195] Run: cat /etc/os-release
	I0229 02:15:21.100607  361093 info.go:137] Remote host: Buildroot 2023.02.9
	I0229 02:15:21.100637  361093 filesync.go:126] Scanning /home/jenkins/minikube-integration/18063-309085/.minikube/addons for local assets ...
	I0229 02:15:21.100736  361093 filesync.go:126] Scanning /home/jenkins/minikube-integration/18063-309085/.minikube/files for local assets ...
	I0229 02:15:21.100864  361093 filesync.go:149] local asset: /home/jenkins/minikube-integration/18063-309085/.minikube/files/etc/ssl/certs/3163362.pem -> 3163362.pem in /etc/ssl/certs
	I0229 02:15:21.101000  361093 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0229 02:15:21.113280  361093 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-309085/.minikube/files/etc/ssl/certs/3163362.pem --> /etc/ssl/certs/3163362.pem (1708 bytes)
	I0229 02:15:21.142831  361093 start.go:303] post-start completed in 140.464811ms
	I0229 02:15:21.142864  361093 fix.go:56] fixHost completed within 20.842581853s
	I0229 02:15:21.142977  361093 main.go:141] libmachine: (embed-certs-665766) Calling .GetSSHHostname
	I0229 02:15:21.145855  361093 main.go:141] libmachine: (embed-certs-665766) DBG | domain embed-certs-665766 has defined MAC address 52:54:00:0f:ed:e3 in network mk-embed-certs-665766
	I0229 02:15:21.146221  361093 main.go:141] libmachine: (embed-certs-665766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:ed:e3", ip: ""} in network mk-embed-certs-665766: {Iface:virbr4 ExpiryTime:2024-02-29 03:15:12 +0000 UTC Type:0 Mac:52:54:00:0f:ed:e3 Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:embed-certs-665766 Clientid:01:52:54:00:0f:ed:e3}
	I0229 02:15:21.146273  361093 main.go:141] libmachine: (embed-certs-665766) DBG | domain embed-certs-665766 has defined IP address 192.168.39.252 and MAC address 52:54:00:0f:ed:e3 in network mk-embed-certs-665766
	I0229 02:15:21.146427  361093 main.go:141] libmachine: (embed-certs-665766) Calling .GetSSHPort
	I0229 02:15:21.146675  361093 main.go:141] libmachine: (embed-certs-665766) Calling .GetSSHKeyPath
	I0229 02:15:21.146826  361093 main.go:141] libmachine: (embed-certs-665766) Calling .GetSSHKeyPath
	I0229 02:15:21.146946  361093 main.go:141] libmachine: (embed-certs-665766) Calling .GetSSHUsername
	I0229 02:15:21.147137  361093 main.go:141] libmachine: Using SSH client type: native
	I0229 02:15:21.147306  361093 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.252 22 <nil> <nil>}
	I0229 02:15:21.147316  361093 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
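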
	I0229 02:15:21.267552  361093 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709172921.247201349
	
	I0229 02:15:21.267579  361093 fix.go:206] guest clock: 1709172921.247201349
	I0229 02:15:21.267590  361093 fix.go:219] Guest: 2024-02-29 02:15:21.247201349 +0000 UTC Remote: 2024-02-29 02:15:21.142869918 +0000 UTC m=+21.001592109 (delta=104.331431ms)
	I0229 02:15:21.267644  361093 fix.go:190] guest clock delta is within tolerance: 104.331431ms
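
fix.go reads the guest's `date +%s.%N` output, converts it to a timestamp, and only resynchronizes the clock when the delta against the host exceeds a tolerance. A sketch of that comparison; the tolerance value is the caller's assumption, not minikube's actual constant:

package clockcheck

import (
	"strconv"
	"strings"
	"time"
)

// guestTime parses `date +%s.%N` output into a time.Time without
// losing nanosecond precision (float parsing would round it off).
func guestTime(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		frac := (parts[1] + "000000000")[:9] // pad/truncate to 9 digits
		if nsec, err = strconv.ParseInt(frac, 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

// withinTolerance mirrors the "guest clock delta is within tolerance"
// decision logged above.
func withinTolerance(guest, host time.Time, tol time.Duration) bool {
	d := guest.Sub(host)
	if d < 0 {
		d = -d
	}
	return d <= tol
}
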
	I0229 02:15:21.267653  361093 start.go:83] releasing machines lock for "embed-certs-665766", held for 20.967392077s
	I0229 02:15:21.267681  361093 main.go:141] libmachine: (embed-certs-665766) Calling .DriverName
	I0229 02:15:21.267949  361093 main.go:141] libmachine: (embed-certs-665766) Calling .GetIP
	I0229 02:15:21.270730  361093 main.go:141] libmachine: (embed-certs-665766) DBG | domain embed-certs-665766 has defined MAC address 52:54:00:0f:ed:e3 in network mk-embed-certs-665766
	I0229 02:15:21.271194  361093 main.go:141] libmachine: (embed-certs-665766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:ed:e3", ip: ""} in network mk-embed-certs-665766: {Iface:virbr4 ExpiryTime:2024-02-29 03:15:12 +0000 UTC Type:0 Mac:52:54:00:0f:ed:e3 Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:embed-certs-665766 Clientid:01:52:54:00:0f:ed:e3}
	I0229 02:15:21.271223  361093 main.go:141] libmachine: (embed-certs-665766) DBG | domain embed-certs-665766 has defined IP address 192.168.39.252 and MAC address 52:54:00:0f:ed:e3 in network mk-embed-certs-665766
	I0229 02:15:21.271559  361093 main.go:141] libmachine: (embed-certs-665766) Calling .DriverName
	I0229 02:15:21.272366  361093 main.go:141] libmachine: (embed-certs-665766) Calling .DriverName
	I0229 02:15:21.272582  361093 main.go:141] libmachine: (embed-certs-665766) Calling .DriverName
	I0229 02:15:21.272673  361093 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0229 02:15:21.272718  361093 main.go:141] libmachine: (embed-certs-665766) Calling .GetSSHHostname
	I0229 02:15:21.272844  361093 ssh_runner.go:195] Run: cat /version.json
	I0229 02:15:21.272867  361093 main.go:141] libmachine: (embed-certs-665766) Calling .GetSSHHostname
	I0229 02:15:21.276061  361093 main.go:141] libmachine: (embed-certs-665766) DBG | domain embed-certs-665766 has defined MAC address 52:54:00:0f:ed:e3 in network mk-embed-certs-665766
	I0229 02:15:21.276385  361093 main.go:141] libmachine: (embed-certs-665766) DBG | domain embed-certs-665766 has defined MAC address 52:54:00:0f:ed:e3 in network mk-embed-certs-665766
	I0229 02:15:21.276515  361093 main.go:141] libmachine: (embed-certs-665766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:ed:e3", ip: ""} in network mk-embed-certs-665766: {Iface:virbr4 ExpiryTime:2024-02-29 03:15:12 +0000 UTC Type:0 Mac:52:54:00:0f:ed:e3 Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:embed-certs-665766 Clientid:01:52:54:00:0f:ed:e3}
	I0229 02:15:21.276563  361093 main.go:141] libmachine: (embed-certs-665766) DBG | domain embed-certs-665766 has defined IP address 192.168.39.252 and MAC address 52:54:00:0f:ed:e3 in network mk-embed-certs-665766
	I0229 02:15:21.276647  361093 main.go:141] libmachine: (embed-certs-665766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:ed:e3", ip: ""} in network mk-embed-certs-665766: {Iface:virbr4 ExpiryTime:2024-02-29 03:15:12 +0000 UTC Type:0 Mac:52:54:00:0f:ed:e3 Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:embed-certs-665766 Clientid:01:52:54:00:0f:ed:e3}
	I0229 02:15:21.276673  361093 main.go:141] libmachine: (embed-certs-665766) DBG | domain embed-certs-665766 has defined IP address 192.168.39.252 and MAC address 52:54:00:0f:ed:e3 in network mk-embed-certs-665766
	I0229 02:15:21.276693  361093 main.go:141] libmachine: (embed-certs-665766) Calling .GetSSHPort
	I0229 02:15:21.276843  361093 main.go:141] libmachine: (embed-certs-665766) Calling .GetSSHPort
	I0229 02:15:21.276926  361093 main.go:141] libmachine: (embed-certs-665766) Calling .GetSSHKeyPath
	I0229 02:15:21.277031  361093 main.go:141] libmachine: (embed-certs-665766) Calling .GetSSHKeyPath
	I0229 02:15:21.277103  361093 main.go:141] libmachine: (embed-certs-665766) Calling .GetSSHUsername
	I0229 02:15:21.277160  361093 main.go:141] libmachine: (embed-certs-665766) Calling .GetSSHUsername
	I0229 02:15:21.277254  361093 sshutil.go:53] new ssh client: &{IP:192.168.39.252 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-309085/.minikube/machines/embed-certs-665766/id_rsa Username:docker}
	I0229 02:15:21.277316  361093 sshutil.go:53] new ssh client: &{IP:192.168.39.252 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-309085/.minikube/machines/embed-certs-665766/id_rsa Username:docker}
	I0229 02:15:21.380428  361093 ssh_runner.go:195] Run: systemctl --version
	I0229 02:15:21.387150  361093 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0229 02:15:21.393537  361093 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0229 02:15:21.393595  361093 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0229 02:15:21.411579  361093 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0229 02:15:21.411609  361093 start.go:475] detecting cgroup driver to use...
	I0229 02:15:21.411682  361093 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0229 02:15:21.442122  361093 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0229 02:15:21.457974  361093 docker.go:217] disabling cri-docker service (if available) ...
	I0229 02:15:21.458041  361093 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0229 02:15:21.474421  361093 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0229 02:15:21.490462  361093 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0229 02:15:21.618342  361093 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0229 02:15:21.802579  361093 docker.go:233] disabling docker service ...
	I0229 02:15:21.802649  361093 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0229 02:15:21.818349  361093 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0229 02:15:21.832338  361093 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0229 02:15:21.975684  361093 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0229 02:15:22.118703  361093 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0229 02:15:22.134525  361093 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0229 02:15:22.155421  361093 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0229 02:15:22.166809  361093 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0229 02:15:22.180082  361093 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0229 02:15:22.180163  361093 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0229 02:15:22.195414  361093 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0229 02:15:22.206812  361093 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0229 02:15:22.217930  361093 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0229 02:15:22.229893  361093 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0229 02:15:22.244345  361093 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
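
The containerd tweaks above are all in-place sed edits of /etc/containerd/config.toml. For illustration, the SystemdCgroup flip expressed as an equivalent Go rewrite (the regexp mirrors the sed expression; this is a sketch, not minikube's code):

package containerdcfg

import (
	"fmt"
	"regexp"
)

// setSystemdCgroup rewrites every SystemdCgroup key in a containerd
// config, preserving the line's original indentation via ${1}.
func setSystemdCgroup(configTOML []byte, enabled bool) []byte {
	re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
	return re.ReplaceAll(configTOML, []byte(fmt.Sprintf("${1}SystemdCgroup = %t", enabled)))
}
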
	I0229 02:15:22.255766  361093 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0229 02:15:22.265968  361093 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0229 02:15:22.266042  361093 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0229 02:15:22.280500  361093 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0229 02:15:22.290749  361093 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 02:15:22.447260  361093 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0229 02:15:22.489965  361093 start.go:522] Will wait 60s for socket path /run/containerd/containerd.sock
	I0229 02:15:22.490049  361093 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0229 02:15:22.495946  361093 retry.go:31] will retry after 681.640314ms: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
	I0229 02:15:23.178613  361093 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0229 02:15:23.186465  361093 start.go:543] Will wait 60s for crictl version
	I0229 02:15:23.186531  361093 ssh_runner.go:195] Run: which crictl
	I0229 02:15:23.191421  361093 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0229 02:15:23.240728  361093 start.go:559] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.7.11
	RuntimeApiVersion:  v1
	I0229 02:15:23.240833  361093 ssh_runner.go:195] Run: containerd --version
	I0229 02:15:23.271700  361093 ssh_runner.go:195] Run: containerd --version
	I0229 02:15:23.311413  361093 out.go:177] * Preparing Kubernetes v1.28.4 on containerd 1.7.11 ...
	I0229 02:15:20.278855  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:15:22.776938  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:15:23.312543  361093 main.go:141] libmachine: (embed-certs-665766) Calling .GetIP
	I0229 02:15:23.315197  361093 main.go:141] libmachine: (embed-certs-665766) DBG | domain embed-certs-665766 has defined MAC address 52:54:00:0f:ed:e3 in network mk-embed-certs-665766
	I0229 02:15:23.315505  361093 main.go:141] libmachine: (embed-certs-665766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:ed:e3", ip: ""} in network mk-embed-certs-665766: {Iface:virbr4 ExpiryTime:2024-02-29 03:15:12 +0000 UTC Type:0 Mac:52:54:00:0f:ed:e3 Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:embed-certs-665766 Clientid:01:52:54:00:0f:ed:e3}
	I0229 02:15:23.315541  361093 main.go:141] libmachine: (embed-certs-665766) DBG | domain embed-certs-665766 has defined IP address 192.168.39.252 and MAC address 52:54:00:0f:ed:e3 in network mk-embed-certs-665766
	I0229 02:15:23.315774  361093 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0229 02:15:23.321091  361093 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0229 02:15:23.335366  361093 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I0229 02:15:23.335482  361093 ssh_runner.go:195] Run: sudo crictl images --output json
	I0229 02:15:23.380351  361093 containerd.go:612] all images are preloaded for containerd runtime.
	I0229 02:15:23.380391  361093 containerd.go:519] Images already preloaded, skipping extraction
	I0229 02:15:23.380462  361093 ssh_runner.go:195] Run: sudo crictl images --output json
	I0229 02:15:23.421267  361093 containerd.go:612] all images are preloaded for containerd runtime.
	I0229 02:15:23.421295  361093 cache_images.go:84] Images are preloaded, skipping loading
	I0229 02:15:23.421374  361093 ssh_runner.go:195] Run: sudo crictl info
	I0229 02:15:23.460765  361093 cni.go:84] Creating CNI manager for ""
	I0229 02:15:23.460802  361093 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0229 02:15:23.460841  361093 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0229 02:15:23.460868  361093 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.252 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-665766 NodeName:embed-certs-665766 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.252"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.252 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0229 02:15:23.461060  361093 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.252
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "embed-certs-665766"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.252
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.252"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
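	The evictionHard thresholds of "0%" together with imageGCHighThresholdPercent: 100 are what the "# disable disk resource management by default" comment refers to: the kubelet never evicts pods on disk pressure and never garbage-collects images. A quick check of that reading, assuming gopkg.in/yaml.v3 is available (a sketch, not part of minikube):

    package main

    import (
    	"fmt"

    	"gopkg.in/yaml.v3"
    )

    type kubeletCfg struct {
    	ImageGCHighThresholdPercent int               `yaml:"imageGCHighThresholdPercent"`
    	EvictionHard                map[string]string `yaml:"evictionHard"`
    }

    const doc = `
    imageGCHighThresholdPercent: 100
    evictionHard:
      nodefs.available: "0%"
      nodefs.inodesFree: "0%"
      imagefs.available: "0%"
    `

    func main() {
    	var c kubeletCfg
    	if err := yaml.Unmarshal([]byte(doc), &c); err != nil {
    		panic(err)
    	}
    	// With 0% free space required before eviction and a 100% image GC
    	// threshold, the kubelet never acts on disk pressure.
    	fmt.Printf("imageGC=%d%% evictionHard=%v\n", c.ImageGCHighThresholdPercent, c.EvictionHard)
    }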
	
	I0229 02:15:23.461154  361093 kubeadm.go:976] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=embed-certs-665766 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.252
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:embed-certs-665766 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0229 02:15:23.461223  361093 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0229 02:15:23.472810  361093 binaries.go:44] Found k8s binaries, skipping transfer
	I0229 02:15:23.472873  361093 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0229 02:15:23.483214  361093 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (392 bytes)
	I0229 02:15:23.502301  361093 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0229 02:15:23.522993  361093 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2113 bytes)
	I0229 02:15:23.543866  361093 ssh_runner.go:195] Run: grep 192.168.39.252	control-plane.minikube.internal$ /etc/hosts
	I0229 02:15:23.548448  361093 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.252	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0229 02:15:23.561909  361093 certs.go:56] Setting up /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/embed-certs-665766 for IP: 192.168.39.252
	I0229 02:15:23.561962  361093 certs.go:190] acquiring lock for shared ca certs: {Name:mkd93205d1e0ff28501dacf7d21e224f19de9501 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:15:23.562164  361093 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18063-309085/.minikube/ca.key
	I0229 02:15:23.562207  361093 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18063-309085/.minikube/proxy-client-ca.key
	I0229 02:15:23.562316  361093 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/embed-certs-665766/client.key
	I0229 02:15:23.562390  361093 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/embed-certs-665766/apiserver.key.ba3365be
	I0229 02:15:23.562442  361093 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/embed-certs-665766/proxy-client.key
	I0229 02:15:23.562597  361093 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-309085/.minikube/certs/home/jenkins/minikube-integration/18063-309085/.minikube/certs/316336.pem (1338 bytes)
	W0229 02:15:23.562642  361093 certs.go:433] ignoring /home/jenkins/minikube-integration/18063-309085/.minikube/certs/home/jenkins/minikube-integration/18063-309085/.minikube/certs/316336_empty.pem, impossibly tiny 0 bytes
	I0229 02:15:23.562657  361093 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-309085/.minikube/certs/home/jenkins/minikube-integration/18063-309085/.minikube/certs/ca-key.pem (1679 bytes)
	I0229 02:15:23.562691  361093 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-309085/.minikube/certs/home/jenkins/minikube-integration/18063-309085/.minikube/certs/ca.pem (1082 bytes)
	I0229 02:15:23.562725  361093 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-309085/.minikube/certs/home/jenkins/minikube-integration/18063-309085/.minikube/certs/cert.pem (1123 bytes)
	I0229 02:15:23.562747  361093 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-309085/.minikube/certs/home/jenkins/minikube-integration/18063-309085/.minikube/certs/key.pem (1675 bytes)
	I0229 02:15:23.562787  361093 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-309085/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18063-309085/.minikube/files/etc/ssl/certs/3163362.pem (1708 bytes)
	I0229 02:15:23.563460  361093 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/embed-certs-665766/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0229 02:15:23.592672  361093 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/embed-certs-665766/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0229 02:15:23.620893  361093 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/embed-certs-665766/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0229 02:15:23.648810  361093 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/embed-certs-665766/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0229 02:15:23.677012  361093 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-309085/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0229 02:15:23.704430  361093 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-309085/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0229 02:15:23.736296  361093 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-309085/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0229 02:15:23.765295  361093 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-309085/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0229 02:15:23.796388  361093 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-309085/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0229 02:15:23.824848  361093 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-309085/.minikube/certs/316336.pem --> /usr/share/ca-certificates/316336.pem (1338 bytes)
	I0229 02:15:23.852786  361093 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-309085/.minikube/files/etc/ssl/certs/3163362.pem --> /usr/share/ca-certificates/3163362.pem (1708 bytes)
	I0229 02:15:23.882944  361093 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0229 02:15:23.907836  361093 ssh_runner.go:195] Run: openssl version
	I0229 02:15:23.916052  361093 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0229 02:15:23.930370  361093 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0229 02:15:23.937378  361093 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 29 01:12 /usr/share/ca-certificates/minikubeCA.pem
	I0229 02:15:23.937461  361093 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0229 02:15:23.944482  361093 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0229 02:15:23.956702  361093 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/316336.pem && ln -fs /usr/share/ca-certificates/316336.pem /etc/ssl/certs/316336.pem"
	I0229 02:15:23.968559  361093 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/316336.pem
	I0229 02:15:23.974129  361093 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 29 01:18 /usr/share/ca-certificates/316336.pem
	I0229 02:15:23.974207  361093 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/316336.pem
	I0229 02:15:23.980916  361093 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/316336.pem /etc/ssl/certs/51391683.0"
	I0229 02:15:23.993131  361093 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3163362.pem && ln -fs /usr/share/ca-certificates/3163362.pem /etc/ssl/certs/3163362.pem"
	I0229 02:15:24.005391  361093 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3163362.pem
	I0229 02:15:24.010645  361093 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 29 01:18 /usr/share/ca-certificates/3163362.pem
	I0229 02:15:24.010717  361093 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3163362.pem
	I0229 02:15:24.017160  361093 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3163362.pem /etc/ssl/certs/3ec20f2e.0"
	I0229 02:15:24.029150  361093 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0229 02:15:24.033893  361093 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0229 02:15:24.040509  361093 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0229 02:15:24.047587  361093 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0229 02:15:24.054651  361093 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0229 02:15:24.061675  361093 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0229 02:15:24.068724  361093 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
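Each "openssl x509 -noout -in ... -checkend 86400" run above asks whether a certificate expires within the next 86400 seconds (24 hours); a non-zero exit would trigger regeneration. A rough Go equivalent using crypto/x509 (the path is one of those probed in the log; this is a sketch, not minikube's implementation):

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // expiresWithin reports whether the PEM certificate at path expires
    // within duration d, matching openssl's -checkend semantics.
    func expiresWithin(path string, d time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("no PEM data in %s", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
    	// Path from the log; run on a control-plane node where the cert exists.
    	soon, err := expiresWithin("/var/lib/minikube/certs/etcd/server.crt", 24*time.Hour)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	fmt.Println("expires within 24h:", soon)
    }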
	I0229 02:15:24.075815  361093 kubeadm.go:404] StartCluster: {Name:embed-certs-665766 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-665766 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.252 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 02:15:24.075975  361093 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0229 02:15:24.076030  361093 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0229 02:15:24.117750  361093 cri.go:89] found id: "b8995b533d655528c2a8eb0c53c0fbb42bbcbe14783dd91f6cd5138bba8e2549"
	I0229 02:15:24.117784  361093 cri.go:89] found id: "42aa7bcbeb0511f905d4e224009d29ed4c569f9cd550d41c85ba6f1ba223c630"
	I0229 02:15:24.117789  361093 cri.go:89] found id: "88e69487fd9180cc2d5f9ec0208eac8913eadd02ba14d3b34ced6bbfeb665662"
	I0229 02:15:24.117793  361093 cri.go:89] found id: "a41584589c6bb92714424efe15054130b4ceb896f3156efa2a7766d6b59d9348"
	I0229 02:15:24.117797  361093 cri.go:89] found id: "b25459b6c6752189380e056e8069c48978248a2010e3dcf020d4fb1d86ede5bb"
	I0229 02:15:24.117806  361093 cri.go:89] found id: "05238b2e56a497eb44499d65d50fbd05b37efc25e20c7faac8ad0a3850f903a4"
	I0229 02:15:24.117810  361093 cri.go:89] found id: "2211513b5227406a5b1d1ef326c875559d34343ee2fbbcaed625e049fb7ed1dd"
	I0229 02:15:24.117814  361093 cri.go:89] found id: "8e4fbd5378900f0e966b2c43bb36dc6420963546aef89d8c08eff3b48520a5b3"
	I0229 02:15:24.117820  361093 cri.go:89] found id: ""
	I0229 02:15:24.117872  361093 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0229 02:15:24.132769  361093 cri.go:116] JSON = null
	W0229 02:15:24.132821  361093 kubeadm.go:411] unpause failed: list paused: list returned 0 containers, but ps returned 8
	I0229 02:15:24.132878  361093 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0229 02:15:24.143554  361093 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0229 02:15:24.143571  361093 kubeadm.go:636] restartCluster start
	I0229 02:15:24.143614  361093 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0229 02:15:24.154226  361093 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:15:24.154952  361093 kubeconfig.go:135] verify returned: extract IP: "embed-certs-665766" does not appear in /home/jenkins/minikube-integration/18063-309085/kubeconfig
	I0229 02:15:24.155312  361093 kubeconfig.go:146] "embed-certs-665766" context is missing from /home/jenkins/minikube-integration/18063-309085/kubeconfig - will repair!
	I0229 02:15:24.155887  361093 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18063-309085/kubeconfig: {Name:mkd85f0f36cbc770f723a754929a01738907b7bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:15:24.157235  361093 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0229 02:15:24.167314  361093 api_server.go:166] Checking apiserver status ...
	I0229 02:15:24.167357  361093 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:15:24.183158  361093 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:15:24.667580  361093 api_server.go:166] Checking apiserver status ...
	I0229 02:15:24.667698  361093 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:15:24.684726  361093 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:15:25.168335  361093 api_server.go:166] Checking apiserver status ...
	I0229 02:15:25.168431  361093 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:15:25.186032  361093 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:15:22.672998  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:23.173387  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:23.673270  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:24.173552  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:24.673074  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:25.173423  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:25.673502  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:26.173531  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:26.672644  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:27.173372  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:23.737162  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:15:26.235726  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:15:24.782276  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:15:27.278368  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:15:25.667972  361093 api_server.go:166] Checking apiserver status ...
	I0229 02:15:25.668059  361093 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:15:25.683528  361093 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:15:26.168096  361093 api_server.go:166] Checking apiserver status ...
	I0229 02:15:26.168217  361093 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:15:26.187348  361093 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:15:26.667839  361093 api_server.go:166] Checking apiserver status ...
	I0229 02:15:26.667920  361093 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:15:26.681557  361093 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:15:27.168163  361093 api_server.go:166] Checking apiserver status ...
	I0229 02:15:27.168262  361093 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:15:27.182779  361093 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:15:27.667408  361093 api_server.go:166] Checking apiserver status ...
	I0229 02:15:27.667531  361093 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:15:27.685526  361093 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:15:28.167636  361093 api_server.go:166] Checking apiserver status ...
	I0229 02:15:28.167744  361093 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:15:28.182746  361093 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:15:28.668333  361093 api_server.go:166] Checking apiserver status ...
	I0229 02:15:28.668407  361093 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:15:28.682544  361093 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:15:29.168119  361093 api_server.go:166] Checking apiserver status ...
	I0229 02:15:29.168237  361093 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:15:29.186304  361093 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:15:29.667836  361093 api_server.go:166] Checking apiserver status ...
	I0229 02:15:29.667914  361093 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:15:29.682884  361093 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:15:30.167618  361093 api_server.go:166] Checking apiserver status ...
	I0229 02:15:30.167731  361093 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:15:30.183089  361093 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:15:27.672738  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:28.173326  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:28.673063  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:29.173178  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:29.673323  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:30.173306  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:30.673429  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:31.172889  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:31.672643  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:32.173215  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:28.239896  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:15:30.735621  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:15:32.736326  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:15:29.278986  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:15:31.777035  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:15:33.777456  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:15:30.667487  361093 api_server.go:166] Checking apiserver status ...
	I0229 02:15:30.667592  361093 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:15:30.685344  361093 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:15:31.167811  361093 api_server.go:166] Checking apiserver status ...
	I0229 02:15:31.167925  361093 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:15:31.185254  361093 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:15:31.667737  361093 api_server.go:166] Checking apiserver status ...
	I0229 02:15:31.667837  361093 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:15:31.681151  361093 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:15:32.167727  361093 api_server.go:166] Checking apiserver status ...
	I0229 02:15:32.167846  361093 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:15:32.188215  361093 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:15:32.667436  361093 api_server.go:166] Checking apiserver status ...
	I0229 02:15:32.667540  361093 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:15:32.683006  361093 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:15:33.167461  361093 api_server.go:166] Checking apiserver status ...
	I0229 02:15:33.167553  361093 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:15:33.180891  361093 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:15:33.667404  361093 api_server.go:166] Checking apiserver status ...
	I0229 02:15:33.667497  361093 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:15:33.686220  361093 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:15:34.167884  361093 api_server.go:166] Checking apiserver status ...
	I0229 02:15:34.167985  361093 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:15:34.181808  361093 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:15:34.181848  361093 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0229 02:15:34.181863  361093 kubeadm.go:1135] stopping kube-system containers ...
	I0229 02:15:34.181878  361093 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0229 02:15:34.181945  361093 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0229 02:15:34.226002  361093 cri.go:89] found id: "b8995b533d655528c2a8eb0c53c0fbb42bbcbe14783dd91f6cd5138bba8e2549"
	I0229 02:15:34.226036  361093 cri.go:89] found id: "42aa7bcbeb0511f905d4e224009d29ed4c569f9cd550d41c85ba6f1ba223c630"
	I0229 02:15:34.226043  361093 cri.go:89] found id: "88e69487fd9180cc2d5f9ec0208eac8913eadd02ba14d3b34ced6bbfeb665662"
	I0229 02:15:34.226048  361093 cri.go:89] found id: "a41584589c6bb92714424efe15054130b4ceb896f3156efa2a7766d6b59d9348"
	I0229 02:15:34.226052  361093 cri.go:89] found id: "b25459b6c6752189380e056e8069c48978248a2010e3dcf020d4fb1d86ede5bb"
	I0229 02:15:34.226058  361093 cri.go:89] found id: "05238b2e56a497eb44499d65d50fbd05b37efc25e20c7faac8ad0a3850f903a4"
	I0229 02:15:34.226062  361093 cri.go:89] found id: "2211513b5227406a5b1d1ef326c875559d34343ee2fbbcaed625e049fb7ed1dd"
	I0229 02:15:34.226067  361093 cri.go:89] found id: "8e4fbd5378900f0e966b2c43bb36dc6420963546aef89d8c08eff3b48520a5b3"
	I0229 02:15:34.226072  361093 cri.go:89] found id: ""
	I0229 02:15:34.226101  361093 cri.go:234] Stopping containers: [b8995b533d655528c2a8eb0c53c0fbb42bbcbe14783dd91f6cd5138bba8e2549 42aa7bcbeb0511f905d4e224009d29ed4c569f9cd550d41c85ba6f1ba223c630 88e69487fd9180cc2d5f9ec0208eac8913eadd02ba14d3b34ced6bbfeb665662 a41584589c6bb92714424efe15054130b4ceb896f3156efa2a7766d6b59d9348 b25459b6c6752189380e056e8069c48978248a2010e3dcf020d4fb1d86ede5bb 05238b2e56a497eb44499d65d50fbd05b37efc25e20c7faac8ad0a3850f903a4 2211513b5227406a5b1d1ef326c875559d34343ee2fbbcaed625e049fb7ed1dd 8e4fbd5378900f0e966b2c43bb36dc6420963546aef89d8c08eff3b48520a5b3]
	I0229 02:15:34.226179  361093 ssh_runner.go:195] Run: which crictl
	I0229 02:15:34.230963  361093 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop --timeout=10 b8995b533d655528c2a8eb0c53c0fbb42bbcbe14783dd91f6cd5138bba8e2549 42aa7bcbeb0511f905d4e224009d29ed4c569f9cd550d41c85ba6f1ba223c630 88e69487fd9180cc2d5f9ec0208eac8913eadd02ba14d3b34ced6bbfeb665662 a41584589c6bb92714424efe15054130b4ceb896f3156efa2a7766d6b59d9348 b25459b6c6752189380e056e8069c48978248a2010e3dcf020d4fb1d86ede5bb 05238b2e56a497eb44499d65d50fbd05b37efc25e20c7faac8ad0a3850f903a4 2211513b5227406a5b1d1ef326c875559d34343ee2fbbcaed625e049fb7ed1dd 8e4fbd5378900f0e966b2c43bb36dc6420963546aef89d8c08eff3b48520a5b3
	I0229 02:15:34.280013  361093 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0229 02:15:34.303092  361093 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 02:15:34.313538  361093 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 02:15:34.313601  361093 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0229 02:15:34.324217  361093 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0229 02:15:34.324245  361093 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 02:15:34.474732  361093 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 02:15:32.672712  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:33.172874  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:33.672874  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:34.173296  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:34.673021  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:35.172643  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:35.672743  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:36.172648  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:36.673171  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:37.172582  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:35.237112  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:15:37.240703  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:15:35.779547  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:15:37.779743  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:15:35.326453  361093 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0229 02:15:35.551798  361093 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 02:15:35.634250  361093 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
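The restart path above drives kubeadm one init phase at a time (certs, kubeconfig, kubelet-start, control-plane, etcd) against the generated /var/tmp/minikube/kubeadm.yaml. A sketch of that sequencing via os/exec (minikube actually runs these over SSH with sudo and a pinned PATH; this is illustrative only):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    func main() {
    	kubeadm := "/var/lib/minikube/binaries/v1.28.4/kubeadm"
    	cfg := "/var/tmp/minikube/kubeadm.yaml"
    	// Phase order as it appears in the log above.
    	for _, phase := range [][]string{
    		{"init", "phase", "certs", "all"},
    		{"init", "phase", "kubeconfig", "all"},
    		{"init", "phase", "kubelet-start"},
    		{"init", "phase", "control-plane", "all"},
    		{"init", "phase", "etcd", "local"},
    	} {
    		cmd := exec.Command(kubeadm, append(phase, "--config", cfg)...)
    		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
    		if err := cmd.Run(); err != nil {
    			fmt.Fprintf(os.Stderr, "phase %v failed: %v\n", phase, err)
    			os.Exit(1)
    		}
    	}
    }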
	I0229 02:15:35.722113  361093 api_server.go:52] waiting for apiserver process to appear ...
	I0229 02:15:35.722208  361093 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:36.222305  361093 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:36.723392  361093 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:37.223304  361093 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:37.251520  361093 api_server.go:72] duration metric: took 1.52940545s to wait for apiserver process to appear ...
	I0229 02:15:37.251556  361093 api_server.go:88] waiting for apiserver healthz status ...
	I0229 02:15:37.251583  361093 api_server.go:253] Checking apiserver healthz at https://192.168.39.252:8443/healthz ...
	I0229 02:15:37.252131  361093 api_server.go:269] stopped: https://192.168.39.252:8443/healthz: Get "https://192.168.39.252:8443/healthz": dial tcp 192.168.39.252:8443: connect: connection refused
	I0229 02:15:37.751668  361093 api_server.go:253] Checking apiserver healthz at https://192.168.39.252:8443/healthz ...
	I0229 02:15:40.172368  361093 api_server.go:279] https://192.168.39.252:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0229 02:15:40.172411  361093 api_server.go:103] status: https://192.168.39.252:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0229 02:15:40.172431  361093 api_server.go:253] Checking apiserver healthz at https://192.168.39.252:8443/healthz ...
	I0229 02:15:40.219812  361093 api_server.go:279] https://192.168.39.252:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0229 02:15:40.219848  361093 api_server.go:103] status: https://192.168.39.252:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0229 02:15:40.251758  361093 api_server.go:253] Checking apiserver healthz at https://192.168.39.252:8443/healthz ...
	I0229 02:15:40.277955  361093 api_server.go:279] https://192.168.39.252:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0229 02:15:40.277987  361093 api_server.go:103] status: https://192.168.39.252:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0229 02:15:40.751985  361093 api_server.go:253] Checking apiserver healthz at https://192.168.39.252:8443/healthz ...
	I0229 02:15:40.760486  361093 api_server.go:279] https://192.168.39.252:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0229 02:15:40.760517  361093 api_server.go:103] status: https://192.168.39.252:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0229 02:15:41.252018  361093 api_server.go:253] Checking apiserver healthz at https://192.168.39.252:8443/healthz ...
	I0229 02:15:41.266211  361093 api_server.go:279] https://192.168.39.252:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0229 02:15:41.266256  361093 api_server.go:103] status: https://192.168.39.252:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0229 02:15:41.751788  361093 api_server.go:253] Checking apiserver healthz at https://192.168.39.252:8443/healthz ...
	I0229 02:15:41.761815  361093 api_server.go:279] https://192.168.39.252:8443/healthz returned 200:
	ok
	I0229 02:15:41.772061  361093 api_server.go:141] control plane version: v1.28.4
	I0229 02:15:41.772105  361093 api_server.go:131] duration metric: took 4.520539314s to wait for apiserver health ...
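The healthz wait above tolerates 403 (anonymous access to /healthz is forbidden until RBAC bootstraps) and 500 (poststarthooks still failing) and keeps polling until a plain 200 "ok". A minimal sketch of that loop, with TLS verification skipped since the probe is unauthenticated (not minikube's actual client):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"os"
    	"time"
    )

    func waitHealthy(url string, timeout time.Duration) error {
    	client := &http.Client{
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    		Timeout:   2 * time.Second,
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil // apiserver reports healthy
    			}
    			// 403: RBAC is up but anonymous /healthz is forbidden;
    			// 500: the body lists the poststarthooks still failing.
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("apiserver never became healthy at %s", url)
    }

    func main() {
    	if err := waitHealthy("https://192.168.39.252:8443/healthz", time.Minute); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    }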
	I0229 02:15:41.772119  361093 cni.go:84] Creating CNI manager for ""
	I0229 02:15:41.772128  361093 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0229 02:15:41.774160  361093 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0229 02:15:37.672994  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:38.172969  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:38.673225  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:39.173291  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:39.673458  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:40.172766  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:40.672830  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:41.173174  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:41.672618  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:42.172606  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:39.735965  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:15:41.737511  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:15:40.280036  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:15:42.777915  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:15:41.775526  361093 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0229 02:15:41.792000  361093 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
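The 457-byte /etc/cni/net.d/1-k8s.conflist written above is not shown in the log; a generic bridge + host-local conflist for the 10.244.0.0/16 pod CIDR from the kubeadm config would look roughly like the JSON this sketch emits (field values are assumptions, not the exact file minikube writes):

    package main

    import (
    	"encoding/json"
    	"fmt"
    )

    func main() {
    	// An assumed bridge CNI configuration; the real file may differ.
    	conflist := map[string]any{
    		"cniVersion": "0.3.1",
    		"name":       "bridge",
    		"plugins": []map[string]any{
    			{
    				"type":             "bridge",
    				"bridge":           "bridge",
    				"isDefaultGateway": true,
    				"ipMasq":           true,
    				"hairpinMode":      true,
    				"ipam": map[string]any{
    					"type":   "host-local",
    					"subnet": "10.244.0.0/16",
    				},
    			},
    			{"type": "portmap", "capabilities": map[string]bool{"portMappings": true}},
    		},
    	}
    	out, _ := json.MarshalIndent(conflist, "", "  ")
    	fmt.Println(string(out)) // would be written to /etc/cni/net.d/1-k8s.conflist
    }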
	I0229 02:15:41.824077  361093 system_pods.go:43] waiting for kube-system pods to appear ...
	I0229 02:15:41.837796  361093 system_pods.go:59] 8 kube-system pods found
	I0229 02:15:41.837831  361093 system_pods.go:61] "coredns-5dd5756b68-jg9n5" [138dcd77-9fb3-4537-9459-87349af766d9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0229 02:15:41.837839  361093 system_pods.go:61] "etcd-embed-certs-665766" [039cfea9-3fcf-4a51-85b9-63c0977c701f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0229 02:15:41.837847  361093 system_pods.go:61] "kube-apiserver-embed-certs-665766" [6cb7255e-9e43-4b01-a138-34734a11139b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0229 02:15:41.837854  361093 system_pods.go:61] "kube-controller-manager-embed-certs-665766" [aa50c4f2-0528-4366-bc5c-4b625ddbb3cc] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0229 02:15:41.837862  361093 system_pods.go:61] "kube-proxy-xctbw" [ab0177e6-72c5-4bdf-a6b4-fa28d0a500eb] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0229 02:15:41.837867  361093 system_pods.go:61] "kube-scheduler-embed-certs-665766" [0013ea0f-3fa3-426e-8e0f-709889bb7239] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0229 02:15:41.837873  361093 system_pods.go:61] "metrics-server-57f55c9bc5-9sdkl" [5d0edfb3-db05-4877-b2e1-b7dda944ee2e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0229 02:15:41.837878  361093 system_pods.go:61] "storage-provisioner" [1bfb386b-a55e-47c2-873c-894fb156094f] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0229 02:15:41.837885  361093 system_pods.go:74] duration metric: took 13.782999ms to wait for pod list to return data ...
	I0229 02:15:41.837894  361093 node_conditions.go:102] verifying NodePressure condition ...
	I0229 02:15:41.846499  361093 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0229 02:15:41.846534  361093 node_conditions.go:123] node cpu capacity is 2
	I0229 02:15:41.846549  361093 node_conditions.go:105] duration metric: took 8.649228ms to run NodePressure ...
	I0229 02:15:41.846602  361093 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 02:15:42.233849  361093 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0229 02:15:42.244135  361093 kubeadm.go:787] kubelet initialised
	I0229 02:15:42.244157  361093 kubeadm.go:788] duration metric: took 10.283459ms waiting for restarted kubelet to initialise ...
	I0229 02:15:42.244165  361093 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 02:15:42.251055  361093 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-jg9n5" in "kube-system" namespace to be "Ready" ...
	I0229 02:15:44.258993  361093 pod_ready.go:102] pod "coredns-5dd5756b68-jg9n5" in "kube-system" namespace has status "Ready":"False"
	I0229 02:15:42.673016  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:43.173406  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:43.672843  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:44.173068  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:44.673562  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:45.172977  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:45.673254  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:46.172757  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:46.672796  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:47.173606  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:43.738332  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:15:46.236882  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:15:44.778794  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:15:47.278336  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:15:46.760126  361093 pod_ready.go:102] pod "coredns-5dd5756b68-jg9n5" in "kube-system" namespace has status "Ready":"False"
	I0229 02:15:48.761905  361093 pod_ready.go:102] pod "coredns-5dd5756b68-jg9n5" in "kube-system" namespace has status "Ready":"False"
	I0229 02:15:47.673527  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:48.173283  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:48.673578  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:15:48.673686  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:15:48.735531  360776 cri.go:89] found id: ""
	I0229 02:15:48.735560  360776 logs.go:276] 0 containers: []
	W0229 02:15:48.735572  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:15:48.735580  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:15:48.735665  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:15:48.777775  360776 cri.go:89] found id: ""
	I0229 02:15:48.777801  360776 logs.go:276] 0 containers: []
	W0229 02:15:48.777812  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:15:48.777819  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:15:48.777893  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:15:48.816348  360776 cri.go:89] found id: ""
	I0229 02:15:48.816382  360776 logs.go:276] 0 containers: []
	W0229 02:15:48.816391  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:15:48.816398  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:15:48.816466  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:15:48.856576  360776 cri.go:89] found id: ""
	I0229 02:15:48.856627  360776 logs.go:276] 0 containers: []
	W0229 02:15:48.856640  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:15:48.856648  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:15:48.856712  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:15:48.896298  360776 cri.go:89] found id: ""
	I0229 02:15:48.896325  360776 logs.go:276] 0 containers: []
	W0229 02:15:48.896333  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:15:48.896339  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:15:48.896419  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:15:48.939474  360776 cri.go:89] found id: ""
	I0229 02:15:48.939523  360776 logs.go:276] 0 containers: []
	W0229 02:15:48.939537  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:15:48.939545  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:15:48.939609  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:15:48.979602  360776 cri.go:89] found id: ""
	I0229 02:15:48.979630  360776 logs.go:276] 0 containers: []
	W0229 02:15:48.979642  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:15:48.979649  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:15:48.979734  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:15:49.020455  360776 cri.go:89] found id: ""
	I0229 02:15:49.020485  360776 logs.go:276] 0 containers: []
	W0229 02:15:49.020495  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:15:49.020505  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:15:49.020517  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:15:49.070608  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:15:49.070653  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:15:49.086878  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:15:49.086913  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:15:49.222506  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:15:49.222532  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:15:49.222565  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:15:49.261476  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:15:49.261507  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:15:51.812576  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:51.828566  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:15:51.828628  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:15:51.867885  360776 cri.go:89] found id: ""
	I0229 02:15:51.867913  360776 logs.go:276] 0 containers: []
	W0229 02:15:51.867922  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:15:51.867928  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:15:51.867999  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:15:51.910828  360776 cri.go:89] found id: ""
	I0229 02:15:51.910862  360776 logs.go:276] 0 containers: []
	W0229 02:15:51.910872  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:15:51.910879  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:15:51.910928  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:15:51.951547  360776 cri.go:89] found id: ""
	I0229 02:15:51.951578  360776 logs.go:276] 0 containers: []
	W0229 02:15:51.951590  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:15:51.951598  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:15:51.951683  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:15:51.992485  360776 cri.go:89] found id: ""
	I0229 02:15:51.992511  360776 logs.go:276] 0 containers: []
	W0229 02:15:51.992519  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:15:51.992525  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:15:51.992579  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:15:52.036445  360776 cri.go:89] found id: ""
	I0229 02:15:52.036481  360776 logs.go:276] 0 containers: []
	W0229 02:15:52.036494  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:15:52.036502  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:15:52.036567  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:15:52.075247  360776 cri.go:89] found id: ""
	I0229 02:15:52.075279  360776 logs.go:276] 0 containers: []
	W0229 02:15:52.075289  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:15:52.075298  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:15:52.075379  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:15:52.117468  360776 cri.go:89] found id: ""
	I0229 02:15:52.117498  360776 logs.go:276] 0 containers: []
	W0229 02:15:52.117507  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:15:52.117513  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:15:52.117575  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:15:52.156923  360776 cri.go:89] found id: ""
	I0229 02:15:52.156953  360776 logs.go:276] 0 containers: []
	W0229 02:15:52.156962  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:15:52.156972  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:15:52.156984  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:15:52.209140  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:15:52.209181  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:15:52.224877  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:15:52.224952  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:15:52.313049  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:15:52.313079  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:15:52.313096  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:15:48.237478  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:15:50.737111  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:15:52.737652  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:15:49.777365  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:15:51.778542  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:15:51.260945  361093 pod_ready.go:102] pod "coredns-5dd5756b68-jg9n5" in "kube-system" namespace has status "Ready":"False"
	I0229 02:15:52.758125  361093 pod_ready.go:92] pod "coredns-5dd5756b68-jg9n5" in "kube-system" namespace has status "Ready":"True"
	I0229 02:15:52.758156  361093 pod_ready.go:81] duration metric: took 10.507075504s waiting for pod "coredns-5dd5756b68-jg9n5" in "kube-system" namespace to be "Ready" ...
	I0229 02:15:52.758168  361093 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-665766" in "kube-system" namespace to be "Ready" ...
	I0229 02:15:54.767738  361093 pod_ready.go:102] pod "etcd-embed-certs-665766" in "kube-system" namespace has status "Ready":"False"
	I0229 02:15:52.361468  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:15:52.361520  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:15:54.934192  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:54.950604  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:15:54.950673  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:15:54.997665  360776 cri.go:89] found id: ""
	I0229 02:15:54.997700  360776 logs.go:276] 0 containers: []
	W0229 02:15:54.997713  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:15:54.997738  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:15:54.997824  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:15:55.043835  360776 cri.go:89] found id: ""
	I0229 02:15:55.043865  360776 logs.go:276] 0 containers: []
	W0229 02:15:55.043878  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:15:55.043885  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:15:55.043952  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:15:55.084745  360776 cri.go:89] found id: ""
	I0229 02:15:55.084773  360776 logs.go:276] 0 containers: []
	W0229 02:15:55.084784  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:15:55.084793  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:15:55.084857  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:15:55.126607  360776 cri.go:89] found id: ""
	I0229 02:15:55.126638  360776 logs.go:276] 0 containers: []
	W0229 02:15:55.126652  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:15:55.126660  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:15:55.126723  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:15:55.168954  360776 cri.go:89] found id: ""
	I0229 02:15:55.168984  360776 logs.go:276] 0 containers: []
	W0229 02:15:55.168997  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:15:55.169004  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:15:55.169068  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:15:55.209769  360776 cri.go:89] found id: ""
	I0229 02:15:55.209802  360776 logs.go:276] 0 containers: []
	W0229 02:15:55.209813  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:15:55.209819  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:15:55.209874  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:15:55.252174  360776 cri.go:89] found id: ""
	I0229 02:15:55.252206  360776 logs.go:276] 0 containers: []
	W0229 02:15:55.252218  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:15:55.252226  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:15:55.252280  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:15:55.301449  360776 cri.go:89] found id: ""
	I0229 02:15:55.301483  360776 logs.go:276] 0 containers: []
	W0229 02:15:55.301496  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:15:55.301508  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:15:55.301524  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:15:55.406764  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:15:55.406785  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:15:55.406810  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:15:55.450166  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:15:55.450213  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:15:55.499652  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:15:55.499703  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:15:55.548616  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:15:55.548665  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:15:54.738939  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:15:57.236199  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:15:54.278386  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:15:56.779465  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:15:55.767698  361093 pod_ready.go:92] pod "etcd-embed-certs-665766" in "kube-system" namespace has status "Ready":"True"
	I0229 02:15:55.767724  361093 pod_ready.go:81] duration metric: took 3.009548645s waiting for pod "etcd-embed-certs-665766" in "kube-system" namespace to be "Ready" ...
	I0229 02:15:55.767733  361093 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-665766" in "kube-system" namespace to be "Ready" ...
	I0229 02:15:55.777263  361093 pod_ready.go:92] pod "kube-apiserver-embed-certs-665766" in "kube-system" namespace has status "Ready":"True"
	I0229 02:15:55.777303  361093 pod_ready.go:81] duration metric: took 9.561735ms waiting for pod "kube-apiserver-embed-certs-665766" in "kube-system" namespace to be "Ready" ...
	I0229 02:15:55.777315  361093 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-665766" in "kube-system" namespace to be "Ready" ...
	I0229 02:15:55.785388  361093 pod_ready.go:92] pod "kube-controller-manager-embed-certs-665766" in "kube-system" namespace has status "Ready":"True"
	I0229 02:15:55.785410  361093 pod_ready.go:81] duration metric: took 8.086257ms waiting for pod "kube-controller-manager-embed-certs-665766" in "kube-system" namespace to be "Ready" ...
	I0229 02:15:55.785420  361093 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-xctbw" in "kube-system" namespace to be "Ready" ...
	I0229 02:15:55.791419  361093 pod_ready.go:92] pod "kube-proxy-xctbw" in "kube-system" namespace has status "Ready":"True"
	I0229 02:15:55.791437  361093 pod_ready.go:81] duration metric: took 6.009783ms waiting for pod "kube-proxy-xctbw" in "kube-system" namespace to be "Ready" ...
	I0229 02:15:55.791448  361093 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-665766" in "kube-system" namespace to be "Ready" ...
	I0229 02:15:56.799602  361093 pod_ready.go:92] pod "kube-scheduler-embed-certs-665766" in "kube-system" namespace has status "Ready":"True"
	I0229 02:15:56.799631  361093 pod_ready.go:81] duration metric: took 1.008175236s waiting for pod "kube-scheduler-embed-certs-665766" in "kube-system" namespace to be "Ready" ...
	I0229 02:15:56.799644  361093 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace to be "Ready" ...
	I0229 02:15:58.807838  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:15:58.064634  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:15:58.080287  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:15:58.080365  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:15:58.119448  360776 cri.go:89] found id: ""
	I0229 02:15:58.119480  360776 logs.go:276] 0 containers: []
	W0229 02:15:58.119492  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:15:58.119500  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:15:58.119563  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:15:58.159896  360776 cri.go:89] found id: ""
	I0229 02:15:58.159926  360776 logs.go:276] 0 containers: []
	W0229 02:15:58.159937  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:15:58.159945  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:15:58.160009  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:15:58.197746  360776 cri.go:89] found id: ""
	I0229 02:15:58.197774  360776 logs.go:276] 0 containers: []
	W0229 02:15:58.197785  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:15:58.197794  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:15:58.197873  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:15:58.242003  360776 cri.go:89] found id: ""
	I0229 02:15:58.242031  360776 logs.go:276] 0 containers: []
	W0229 02:15:58.242043  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:15:58.242051  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:15:58.242143  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:15:58.282762  360776 cri.go:89] found id: ""
	I0229 02:15:58.282795  360776 logs.go:276] 0 containers: []
	W0229 02:15:58.282815  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:15:58.282823  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:15:58.282889  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:15:58.324333  360776 cri.go:89] found id: ""
	I0229 02:15:58.324364  360776 logs.go:276] 0 containers: []
	W0229 02:15:58.324374  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:15:58.324380  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:15:58.324436  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:15:58.392279  360776 cri.go:89] found id: ""
	I0229 02:15:58.392308  360776 logs.go:276] 0 containers: []
	W0229 02:15:58.392321  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:15:58.392329  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:15:58.392390  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:15:58.448147  360776 cri.go:89] found id: ""
	I0229 02:15:58.448181  360776 logs.go:276] 0 containers: []
	W0229 02:15:58.448194  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:15:58.448211  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:15:58.448259  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:15:58.501620  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:15:58.501657  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:15:58.519453  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:15:58.519486  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:15:58.595868  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:15:58.595897  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:15:58.595917  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:15:58.630969  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:15:58.631004  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:16:01.181602  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:16:01.196379  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:16:01.196456  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:16:01.237984  360776 cri.go:89] found id: ""
	I0229 02:16:01.238008  360776 logs.go:276] 0 containers: []
	W0229 02:16:01.238019  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:16:01.238028  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:16:01.238109  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:16:01.284709  360776 cri.go:89] found id: ""
	I0229 02:16:01.284737  360776 logs.go:276] 0 containers: []
	W0229 02:16:01.284748  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:16:01.284756  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:16:01.284829  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:16:01.328675  360776 cri.go:89] found id: ""
	I0229 02:16:01.328711  360776 logs.go:276] 0 containers: []
	W0229 02:16:01.328724  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:16:01.328732  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:16:01.328787  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:16:01.384088  360776 cri.go:89] found id: ""
	I0229 02:16:01.384118  360776 logs.go:276] 0 containers: []
	W0229 02:16:01.384127  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:16:01.384133  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:16:01.384182  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:16:01.444582  360776 cri.go:89] found id: ""
	I0229 02:16:01.444617  360776 logs.go:276] 0 containers: []
	W0229 02:16:01.444630  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:16:01.444638  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:16:01.444709  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:16:01.483202  360776 cri.go:89] found id: ""
	I0229 02:16:01.483237  360776 logs.go:276] 0 containers: []
	W0229 02:16:01.483250  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:16:01.483258  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:16:01.483327  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:16:01.520422  360776 cri.go:89] found id: ""
	I0229 02:16:01.520455  360776 logs.go:276] 0 containers: []
	W0229 02:16:01.520467  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:16:01.520475  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:16:01.520545  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:16:01.558295  360776 cri.go:89] found id: ""
	I0229 02:16:01.558327  360776 logs.go:276] 0 containers: []
	W0229 02:16:01.558336  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:16:01.558348  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:16:01.558363  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:16:01.594473  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:16:01.594508  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:16:01.640865  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:16:01.640906  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:16:01.691693  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:16:01.691746  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:16:01.708474  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:16:01.708507  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:16:01.788334  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:15:59.237127  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:01.237269  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:15:59.278029  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:01.278662  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:03.280874  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:01.309386  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:03.807534  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:04.288565  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:16:04.304344  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:16:04.304435  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:16:04.364586  360776 cri.go:89] found id: ""
	I0229 02:16:04.364623  360776 logs.go:276] 0 containers: []
	W0229 02:16:04.364635  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:16:04.364643  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:16:04.364712  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:16:04.423593  360776 cri.go:89] found id: ""
	I0229 02:16:04.423624  360776 logs.go:276] 0 containers: []
	W0229 02:16:04.423637  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:16:04.423646  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:16:04.423715  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:16:04.463437  360776 cri.go:89] found id: ""
	I0229 02:16:04.463471  360776 logs.go:276] 0 containers: []
	W0229 02:16:04.463482  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:16:04.463491  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:16:04.463553  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:16:04.500526  360776 cri.go:89] found id: ""
	I0229 02:16:04.500550  360776 logs.go:276] 0 containers: []
	W0229 02:16:04.500559  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:16:04.500565  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:16:04.500646  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:16:04.541324  360776 cri.go:89] found id: ""
	I0229 02:16:04.541363  360776 logs.go:276] 0 containers: []
	W0229 02:16:04.541376  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:16:04.541389  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:16:04.541466  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:16:04.586036  360776 cri.go:89] found id: ""
	I0229 02:16:04.586063  360776 logs.go:276] 0 containers: []
	W0229 02:16:04.586071  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:16:04.586093  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:16:04.586221  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:16:04.624838  360776 cri.go:89] found id: ""
	I0229 02:16:04.624864  360776 logs.go:276] 0 containers: []
	W0229 02:16:04.624873  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:16:04.624879  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:16:04.624942  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:16:04.665188  360776 cri.go:89] found id: ""
	I0229 02:16:04.665214  360776 logs.go:276] 0 containers: []
	W0229 02:16:04.665223  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:16:04.665235  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:16:04.665248  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:16:04.710572  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:16:04.710608  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:16:04.759440  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:16:04.759473  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:16:04.777220  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:16:04.777252  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:16:04.855773  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:16:04.855802  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:16:04.855820  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:16:03.736436  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:06.238443  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:05.779438  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:08.279021  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:05.808060  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:08.307721  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:07.391235  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:16:07.407347  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:16:07.407424  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:16:07.456950  360776 cri.go:89] found id: ""
	I0229 02:16:07.456978  360776 logs.go:276] 0 containers: []
	W0229 02:16:07.456988  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:16:07.456994  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:16:07.457056  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:16:07.501947  360776 cri.go:89] found id: ""
	I0229 02:16:07.501978  360776 logs.go:276] 0 containers: []
	W0229 02:16:07.501989  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:16:07.501996  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:16:07.502055  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:16:07.543248  360776 cri.go:89] found id: ""
	I0229 02:16:07.543283  360776 logs.go:276] 0 containers: []
	W0229 02:16:07.543296  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:16:07.543303  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:16:07.543369  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:16:07.580554  360776 cri.go:89] found id: ""
	I0229 02:16:07.580587  360776 logs.go:276] 0 containers: []
	W0229 02:16:07.580599  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:16:07.580606  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:16:07.580674  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:16:07.618930  360776 cri.go:89] found id: ""
	I0229 02:16:07.618955  360776 logs.go:276] 0 containers: []
	W0229 02:16:07.618966  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:16:07.618974  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:16:07.619038  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:16:07.656206  360776 cri.go:89] found id: ""
	I0229 02:16:07.656237  360776 logs.go:276] 0 containers: []
	W0229 02:16:07.656246  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:16:07.656252  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:16:07.656312  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:16:07.692225  360776 cri.go:89] found id: ""
	I0229 02:16:07.692255  360776 logs.go:276] 0 containers: []
	W0229 02:16:07.692266  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:16:07.692273  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:16:07.692334  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:16:07.728085  360776 cri.go:89] found id: ""
	I0229 02:16:07.728118  360776 logs.go:276] 0 containers: []
	W0229 02:16:07.728130  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:16:07.728143  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:16:07.728161  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:16:07.744078  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:16:07.744102  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:16:07.819861  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:16:07.819891  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:16:07.819906  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:16:07.854665  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:16:07.854694  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:16:07.899029  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:16:07.899059  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:16:10.449274  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:16:10.466228  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:16:10.466305  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:16:10.516655  360776 cri.go:89] found id: ""
	I0229 02:16:10.516686  360776 logs.go:276] 0 containers: []
	W0229 02:16:10.516699  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:16:10.516707  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:16:10.516776  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:16:10.551194  360776 cri.go:89] found id: ""
	I0229 02:16:10.551222  360776 logs.go:276] 0 containers: []
	W0229 02:16:10.551240  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:16:10.551247  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:16:10.551309  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:16:10.586984  360776 cri.go:89] found id: ""
	I0229 02:16:10.587012  360776 logs.go:276] 0 containers: []
	W0229 02:16:10.587021  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:16:10.587033  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:16:10.587101  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:16:10.631726  360776 cri.go:89] found id: ""
	I0229 02:16:10.631758  360776 logs.go:276] 0 containers: []
	W0229 02:16:10.631768  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:16:10.631775  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:16:10.631831  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:16:10.673054  360776 cri.go:89] found id: ""
	I0229 02:16:10.673090  360776 logs.go:276] 0 containers: []
	W0229 02:16:10.673102  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:16:10.673110  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:16:10.673175  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:16:10.716401  360776 cri.go:89] found id: ""
	I0229 02:16:10.716428  360776 logs.go:276] 0 containers: []
	W0229 02:16:10.716437  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:16:10.716448  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:16:10.716495  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:16:10.762425  360776 cri.go:89] found id: ""
	I0229 02:16:10.762451  360776 logs.go:276] 0 containers: []
	W0229 02:16:10.762460  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:16:10.762465  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:16:10.762523  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:16:10.800934  360776 cri.go:89] found id: ""
	I0229 02:16:10.800959  360776 logs.go:276] 0 containers: []
	W0229 02:16:10.800970  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:16:10.800981  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:16:10.800995  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:16:10.851152  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:16:10.851178  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:16:10.865410  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:16:10.865436  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:16:10.941654  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:16:10.941679  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:16:10.941699  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:16:10.977068  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:16:10.977099  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:16:08.736174  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:10.738304  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:10.779517  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:13.277888  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:10.308754  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:12.807138  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:14.807518  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:13.524032  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:16:13.540646  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:16:13.540721  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:16:13.584696  360776 cri.go:89] found id: ""
	I0229 02:16:13.584727  360776 logs.go:276] 0 containers: []
	W0229 02:16:13.584740  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:16:13.584748  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:16:13.584819  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:16:13.620800  360776 cri.go:89] found id: ""
	I0229 02:16:13.620843  360776 logs.go:276] 0 containers: []
	W0229 02:16:13.620852  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:16:13.620858  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:16:13.620936  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:16:13.659179  360776 cri.go:89] found id: ""
	I0229 02:16:13.659209  360776 logs.go:276] 0 containers: []
	W0229 02:16:13.659218  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:16:13.659224  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:16:13.659286  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:16:13.695772  360776 cri.go:89] found id: ""
	I0229 02:16:13.695821  360776 logs.go:276] 0 containers: []
	W0229 02:16:13.695832  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:16:13.695840  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:16:13.695902  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:16:13.736870  360776 cri.go:89] found id: ""
	I0229 02:16:13.736895  360776 logs.go:276] 0 containers: []
	W0229 02:16:13.736906  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:16:13.736913  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:16:13.736978  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:16:13.782101  360776 cri.go:89] found id: ""
	I0229 02:16:13.782131  360776 logs.go:276] 0 containers: []
	W0229 02:16:13.782143  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:16:13.782151  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:16:13.782212  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:16:13.822638  360776 cri.go:89] found id: ""
	I0229 02:16:13.822663  360776 logs.go:276] 0 containers: []
	W0229 02:16:13.822672  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:16:13.822677  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:16:13.822741  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:16:13.861761  360776 cri.go:89] found id: ""
	I0229 02:16:13.861787  360776 logs.go:276] 0 containers: []
	W0229 02:16:13.861798  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:16:13.861811  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:16:13.861835  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:16:13.877464  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:16:13.877494  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:16:13.955485  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:16:13.955512  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:16:13.955525  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:16:13.990560  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:16:13.990594  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:16:14.037740  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:16:14.037780  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
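
Every probe cycle in this stretch of the log is the same mechanism: for each control-plane component, minikube runs `sudo crictl ps -a --quiet --name=<component>` over SSH and treats empty stdout as "no container found" (the paired `found id: ""` / `0 containers` lines above). A minimal standalone sketch of that probe follows; it is hypothetical code, not minikube's own cri.go, and it runs directly on the node instead of going through ssh_runner:

    // probe_sketch.go - hypothetical re-creation of the container probe above.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // findContainers runs the same command the log shows; --quiet prints only
    // container IDs (one per line) and -a includes containers that have exited.
    func findContainers(name string) []string {
    	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
    	if err != nil {
    		return nil // crictl unavailable or failed; treat as nothing found
    	}
    	return strings.Fields(string(out)) // empty stdout -> zero IDs
    }

    func main() {
    	for _, name := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
    		if ids := findContainers(name); len(ids) == 0 {
    			fmt.Printf("no container was found matching %q\n", name)
    		} else {
    			fmt.Printf("%s: %v\n", name, ids)
    		}
    	}
    }

An all-empty sweep like the ones above means the container runtime answers but no Kubernetes containers exist yet, which is why the harness then falls back to collecting kubelet, containerd, dmesg and container-status logs.
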
	I0229 02:16:16.588097  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:16:16.603732  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:16:16.603810  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:16:16.644337  360776 cri.go:89] found id: ""
	I0229 02:16:16.644372  360776 logs.go:276] 0 containers: []
	W0229 02:16:16.644393  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:16:16.644404  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:16:16.644474  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:16:16.687530  360776 cri.go:89] found id: ""
	I0229 02:16:16.687562  360776 logs.go:276] 0 containers: []
	W0229 02:16:16.687575  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:16:16.687584  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:16:16.687653  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:16:16.728007  360776 cri.go:89] found id: ""
	I0229 02:16:16.728037  360776 logs.go:276] 0 containers: []
	W0229 02:16:16.728054  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:16:16.728063  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:16:16.728125  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:16:16.770904  360776 cri.go:89] found id: ""
	I0229 02:16:16.770952  360776 logs.go:276] 0 containers: []
	W0229 02:16:16.770964  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:16:16.770973  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:16:16.771041  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:16:16.812270  360776 cri.go:89] found id: ""
	I0229 02:16:16.812294  360776 logs.go:276] 0 containers: []
	W0229 02:16:16.812303  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:16:16.812309  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:16:16.812358  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:16:16.854461  360776 cri.go:89] found id: ""
	I0229 02:16:16.854487  360776 logs.go:276] 0 containers: []
	W0229 02:16:16.854495  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:16:16.854502  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:16:16.854565  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:16:16.893048  360776 cri.go:89] found id: ""
	I0229 02:16:16.893081  360776 logs.go:276] 0 containers: []
	W0229 02:16:16.893093  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:16:16.893102  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:16:16.893175  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:16:16.934533  360776 cri.go:89] found id: ""
	I0229 02:16:16.934565  360776 logs.go:276] 0 containers: []
	W0229 02:16:16.934576  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:16:16.934589  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:16:16.934608  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:16:16.949773  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:16:16.949806  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:16:17.030457  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:16:17.030483  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:16:17.030500  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:16:17.066911  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:16:17.066947  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:16:17.141648  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:16:17.141680  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
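
The recurring `describe nodes` failure is a direct consequence of those empty probes: with no kube-apiserver container running, nothing listens on the apiserver port, so kubectl is refused at the TCP level before any API request is made. A hypothetical one-file check that makes the failure mode concrete (`localhost:8443` is the address from the log; this dials the port directly rather than invoking kubectl):

    // reachable_sketch.go - dial the apiserver port to show why kubectl reports
    // "connection refused" in the blocks above. Hypothetical helper, not part of
    // the test harness.
    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
    	if err != nil {
    		fmt.Println("apiserver not reachable:", err) // e.g. "connect: connection refused"
    		return
    	}
    	conn.Close()
    	fmt.Println("apiserver port is open")
    }
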
	I0229 02:16:13.236967  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:15.736473  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:15.278216  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:17.280028  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:17.307756  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:19.308255  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
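
The pod_ready lines interleaved here come from three other test processes (360079, 360217 and 361093 in the klog headers) polling their metrics-server pods, which keep reporting Ready=False. The condition being checked is the standard PodReady condition on the pod's status. Below is a sketch of the same check using client-go, assuming a reachable cluster and a working kubeconfig; the namespace and pod name are copied from the log, everything else is illustrative:

    // podready_sketch.go - check a pod's Ready condition the way the
    // pod_ready.go lines above do. Assumes client-go and a valid kubeconfig.
    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // isPodReady reports whether the PodReady condition is True.
    func isPodReady(pod *corev1.Pod) bool {
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(),
    		"metrics-server-57f55c9bc5-9sdkl", metav1.GetOptions{})
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("pod %s Ready=%v\n", pod.Name, isPodReady(pod))
    }
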
	I0229 02:16:19.697967  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:16:19.713729  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:16:19.713786  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:16:19.757898  360776 cri.go:89] found id: ""
	I0229 02:16:19.757929  360776 logs.go:276] 0 containers: []
	W0229 02:16:19.757940  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:16:19.757947  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:16:19.757998  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:16:19.807621  360776 cri.go:89] found id: ""
	I0229 02:16:19.807644  360776 logs.go:276] 0 containers: []
	W0229 02:16:19.807652  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:16:19.807658  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:16:19.807704  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:16:19.846030  360776 cri.go:89] found id: ""
	I0229 02:16:19.846060  360776 logs.go:276] 0 containers: []
	W0229 02:16:19.846071  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:16:19.846089  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:16:19.846157  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:16:19.881842  360776 cri.go:89] found id: ""
	I0229 02:16:19.881870  360776 logs.go:276] 0 containers: []
	W0229 02:16:19.881883  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:16:19.881892  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:16:19.881955  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:16:19.917791  360776 cri.go:89] found id: ""
	I0229 02:16:19.917818  360776 logs.go:276] 0 containers: []
	W0229 02:16:19.917830  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:16:19.917837  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:16:19.917922  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:16:19.954147  360776 cri.go:89] found id: ""
	I0229 02:16:19.954174  360776 logs.go:276] 0 containers: []
	W0229 02:16:19.954186  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:16:19.954194  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:16:19.954259  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:16:19.991466  360776 cri.go:89] found id: ""
	I0229 02:16:19.991495  360776 logs.go:276] 0 containers: []
	W0229 02:16:19.991505  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:16:19.991512  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:16:19.991566  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:16:20.032484  360776 cri.go:89] found id: ""
	I0229 02:16:20.032515  360776 logs.go:276] 0 containers: []
	W0229 02:16:20.032526  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:16:20.032540  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:16:20.032556  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:16:20.084743  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:16:20.084781  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:16:20.105586  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:16:20.105626  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:16:20.206486  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:16:20.206513  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:16:20.206528  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:16:20.250720  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:16:20.250748  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:16:18.235820  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:20.235852  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:22.237011  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:19.779151  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:22.278930  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:21.808852  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:24.307883  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:22.796158  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:16:22.812126  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:16:22.812208  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:16:22.849744  360776 cri.go:89] found id: ""
	I0229 02:16:22.849776  360776 logs.go:276] 0 containers: []
	W0229 02:16:22.849792  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:16:22.849800  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:16:22.849865  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:16:22.891875  360776 cri.go:89] found id: ""
	I0229 02:16:22.891909  360776 logs.go:276] 0 containers: []
	W0229 02:16:22.891921  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:16:22.891930  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:16:22.891995  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:16:22.931754  360776 cri.go:89] found id: ""
	I0229 02:16:22.931789  360776 logs.go:276] 0 containers: []
	W0229 02:16:22.931801  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:16:22.931809  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:16:22.931878  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:16:22.979291  360776 cri.go:89] found id: ""
	I0229 02:16:22.979322  360776 logs.go:276] 0 containers: []
	W0229 02:16:22.979340  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:16:22.979349  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:16:22.979437  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:16:23.028390  360776 cri.go:89] found id: ""
	I0229 02:16:23.028416  360776 logs.go:276] 0 containers: []
	W0229 02:16:23.028424  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:16:23.028430  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:16:23.028498  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:16:23.077140  360776 cri.go:89] found id: ""
	I0229 02:16:23.077174  360776 logs.go:276] 0 containers: []
	W0229 02:16:23.077187  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:16:23.077202  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:16:23.077274  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:16:23.124275  360776 cri.go:89] found id: ""
	I0229 02:16:23.124316  360776 logs.go:276] 0 containers: []
	W0229 02:16:23.124326  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:16:23.124333  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:16:23.124386  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:16:23.188748  360776 cri.go:89] found id: ""
	I0229 02:16:23.188789  360776 logs.go:276] 0 containers: []
	W0229 02:16:23.188801  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:16:23.188815  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:16:23.188833  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:16:23.247833  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:16:23.247863  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:16:23.263866  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:16:23.263891  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:16:23.347825  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:16:23.347851  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:16:23.347869  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:16:23.383517  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:16:23.383549  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
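
The "container status" step above uses a shell fallback chain worth unpacking: `which crictl || echo crictl` substitutes the resolved path to crictl when it is on PATH and the bare name otherwise, and if that whole `ps -a` invocation fails the command falls back to `sudo docker ps -a`. A hypothetical standalone equivalent, running the exact command line from the log:

    // statuslog_sketch.go - hypothetical reproduction of the container-status
    // fallback above: prefer crictl, fall back to docker if crictl fails.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// Same command line the log runs through ssh_runner.
    	cmd := "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
    	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
    	if err != nil {
    		fmt.Println("both crictl and docker failed:", err)
    	}
    	fmt.Print(string(out))
    }
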
	I0229 02:16:25.925662  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:16:25.940548  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:16:25.940604  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:16:25.977087  360776 cri.go:89] found id: ""
	I0229 02:16:25.977107  360776 logs.go:276] 0 containers: []
	W0229 02:16:25.977116  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:16:25.977149  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:16:25.977230  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:16:26.018569  360776 cri.go:89] found id: ""
	I0229 02:16:26.018602  360776 logs.go:276] 0 containers: []
	W0229 02:16:26.018615  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:16:26.018623  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:16:26.018682  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:16:26.057726  360776 cri.go:89] found id: ""
	I0229 02:16:26.057754  360776 logs.go:276] 0 containers: []
	W0229 02:16:26.057773  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:16:26.057782  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:16:26.057838  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:16:26.097203  360776 cri.go:89] found id: ""
	I0229 02:16:26.097234  360776 logs.go:276] 0 containers: []
	W0229 02:16:26.097247  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:16:26.097256  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:16:26.097322  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:16:26.141897  360776 cri.go:89] found id: ""
	I0229 02:16:26.141925  360776 logs.go:276] 0 containers: []
	W0229 02:16:26.141941  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:16:26.141948  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:16:26.142009  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:16:26.195074  360776 cri.go:89] found id: ""
	I0229 02:16:26.195101  360776 logs.go:276] 0 containers: []
	W0229 02:16:26.195110  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:16:26.195117  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:16:26.195176  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:16:26.252131  360776 cri.go:89] found id: ""
	I0229 02:16:26.252158  360776 logs.go:276] 0 containers: []
	W0229 02:16:26.252166  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:16:26.252172  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:16:26.252249  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:16:26.292730  360776 cri.go:89] found id: ""
	I0229 02:16:26.292752  360776 logs.go:276] 0 containers: []
	W0229 02:16:26.292760  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:16:26.292770  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:16:26.292781  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:16:26.375138  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:16:26.375165  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:16:26.375182  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:16:26.410167  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:16:26.410196  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:16:26.453622  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:16:26.453665  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:16:26.503732  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:16:26.503762  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:16:24.740152  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:27.236389  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:24.777323  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:26.778399  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:28.779480  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:26.308285  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:28.806555  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:29.018838  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:16:29.034894  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:16:29.034963  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:16:29.086433  360776 cri.go:89] found id: ""
	I0229 02:16:29.086460  360776 logs.go:276] 0 containers: []
	W0229 02:16:29.086472  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:16:29.086481  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:16:29.086562  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:16:29.134575  360776 cri.go:89] found id: ""
	I0229 02:16:29.134606  360776 logs.go:276] 0 containers: []
	W0229 02:16:29.134619  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:16:29.134627  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:16:29.134701  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:16:29.186372  360776 cri.go:89] found id: ""
	I0229 02:16:29.186408  360776 logs.go:276] 0 containers: []
	W0229 02:16:29.186420  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:16:29.186427  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:16:29.186481  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:16:29.236276  360776 cri.go:89] found id: ""
	I0229 02:16:29.236299  360776 logs.go:276] 0 containers: []
	W0229 02:16:29.236306  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:16:29.236312  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:16:29.236361  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:16:29.280342  360776 cri.go:89] found id: ""
	I0229 02:16:29.280371  360776 logs.go:276] 0 containers: []
	W0229 02:16:29.280380  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:16:29.280389  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:16:29.280461  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:16:29.325017  360776 cri.go:89] found id: ""
	I0229 02:16:29.325047  360776 logs.go:276] 0 containers: []
	W0229 02:16:29.325059  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:16:29.325068  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:16:29.325139  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:16:29.367912  360776 cri.go:89] found id: ""
	I0229 02:16:29.367941  360776 logs.go:276] 0 containers: []
	W0229 02:16:29.367951  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:16:29.367957  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:16:29.368021  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:16:29.404499  360776 cri.go:89] found id: ""
	I0229 02:16:29.404528  360776 logs.go:276] 0 containers: []
	W0229 02:16:29.404538  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:16:29.404548  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:16:29.404562  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:16:29.419724  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:16:29.419755  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:16:29.501923  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:16:29.501952  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:16:29.501971  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:16:29.536724  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:16:29.536762  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:16:29.579709  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:16:29.579744  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
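
The remaining log sources are gathered with fixed commands: the last 400 journal lines for the kubelet and containerd units, and kernel messages filtered to warning severity and above (in util-linux dmesg, `-P` disables the pager, `-H` gives human-readable output, `-L=never` strips the color that `-H` would otherwise enable). A hypothetical gatherer mirroring those commands:

    // gather_sketch.go - run the same log-gathering commands shown above and
    // print each with a header. Hypothetical helper, not the harness itself.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func gather(label, cmd string) {
    	// CombinedOutput keeps stderr so failures are visible in the dump.
    	out, _ := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
    	fmt.Printf("==> %s <==\n%s\n", label, out)
    }

    func main() {
    	gather("kubelet", "sudo journalctl -u kubelet -n 400")
    	gather("containerd", "sudo journalctl -u containerd -n 400")
    	gather("dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400")
    }
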
	I0229 02:16:32.129825  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:16:32.147723  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:16:32.147815  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:16:32.206978  360776 cri.go:89] found id: ""
	I0229 02:16:32.207016  360776 logs.go:276] 0 containers: []
	W0229 02:16:32.207030  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:16:32.207041  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:16:32.207140  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:16:32.265296  360776 cri.go:89] found id: ""
	I0229 02:16:32.265328  360776 logs.go:276] 0 containers: []
	W0229 02:16:32.265341  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:16:32.265350  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:16:32.265418  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:16:32.312827  360776 cri.go:89] found id: ""
	I0229 02:16:32.312862  360776 logs.go:276] 0 containers: []
	W0229 02:16:32.312874  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:16:32.312882  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:16:32.312946  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:16:29.736263  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:32.238217  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:31.277342  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:33.279528  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:30.806969  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:32.808795  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:32.359988  360776 cri.go:89] found id: ""
	I0229 02:16:32.360024  360776 logs.go:276] 0 containers: []
	W0229 02:16:32.360036  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:16:32.360045  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:16:32.360106  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:16:32.400969  360776 cri.go:89] found id: ""
	I0229 02:16:32.401003  360776 logs.go:276] 0 containers: []
	W0229 02:16:32.401015  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:16:32.401022  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:16:32.401075  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:16:32.437371  360776 cri.go:89] found id: ""
	I0229 02:16:32.437402  360776 logs.go:276] 0 containers: []
	W0229 02:16:32.437411  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:16:32.437419  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:16:32.437491  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:16:32.481199  360776 cri.go:89] found id: ""
	I0229 02:16:32.481227  360776 logs.go:276] 0 containers: []
	W0229 02:16:32.481238  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:16:32.481247  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:16:32.481329  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:16:32.528100  360776 cri.go:89] found id: ""
	I0229 02:16:32.528137  360776 logs.go:276] 0 containers: []
	W0229 02:16:32.528150  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:16:32.528163  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:16:32.528180  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:16:32.565087  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:16:32.565122  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:16:32.616350  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:16:32.616382  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:16:32.669978  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:16:32.670015  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:16:32.684373  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:16:32.684399  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:16:32.769992  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:16:35.270148  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:16:35.289949  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:16:35.290050  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:16:35.334051  360776 cri.go:89] found id: ""
	I0229 02:16:35.334091  360776 logs.go:276] 0 containers: []
	W0229 02:16:35.334103  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:16:35.334112  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:16:35.334170  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:16:35.378536  360776 cri.go:89] found id: ""
	I0229 02:16:35.378571  360776 logs.go:276] 0 containers: []
	W0229 02:16:35.378585  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:16:35.378594  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:16:35.378660  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:16:35.417867  360776 cri.go:89] found id: ""
	I0229 02:16:35.417894  360776 logs.go:276] 0 containers: []
	W0229 02:16:35.417905  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:16:35.417914  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:16:35.417982  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:16:35.455848  360776 cri.go:89] found id: ""
	I0229 02:16:35.455874  360776 logs.go:276] 0 containers: []
	W0229 02:16:35.455887  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:16:35.455896  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:16:35.455964  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:16:35.494787  360776 cri.go:89] found id: ""
	I0229 02:16:35.494814  360776 logs.go:276] 0 containers: []
	W0229 02:16:35.494822  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:16:35.494828  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:16:35.494890  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:16:35.533553  360776 cri.go:89] found id: ""
	I0229 02:16:35.533583  360776 logs.go:276] 0 containers: []
	W0229 02:16:35.533592  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:16:35.533600  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:16:35.533669  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:16:35.581381  360776 cri.go:89] found id: ""
	I0229 02:16:35.581412  360776 logs.go:276] 0 containers: []
	W0229 02:16:35.581422  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:16:35.581429  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:16:35.581494  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:16:35.619128  360776 cri.go:89] found id: ""
	I0229 02:16:35.619158  360776 logs.go:276] 0 containers: []
	W0229 02:16:35.619169  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:16:35.619181  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:16:35.619197  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:16:35.655180  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:16:35.655216  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:16:35.701558  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:16:35.701585  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:16:35.753639  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:16:35.753672  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:16:35.769711  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:16:35.769743  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:16:35.843861  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:16:34.735895  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:36.736525  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:35.280004  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:37.778345  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:35.308212  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:37.807970  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
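
Each probe cycle opens with a process-level check before the per-container probes: `pgrep -xnf kube-apiserver.*minikube.*` looks for a running kube-apiserver whose full command line (`-f`) exactly matches the pattern (`-x`), returning only the newest match (`-n`). pgrep exits non-zero when nothing matches, which is the case throughout this log. A hypothetical Go version of the same check:

    // apiserver_pid_sketch.go - the process check that opens each probe cycle,
    // reproduced as a standalone program. Hypothetical, not minikube's code.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
    	if err != nil {
    		// pgrep exits 1 when no process matches.
    		fmt.Println("no kube-apiserver process found:", err)
    		return
    	}
    	fmt.Println("kube-apiserver pid:", strings.TrimSpace(string(out)))
    }
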
	I0229 02:16:38.345063  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:16:38.361259  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:16:38.361345  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:16:38.405901  360776 cri.go:89] found id: ""
	I0229 02:16:38.405936  360776 logs.go:276] 0 containers: []
	W0229 02:16:38.405949  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:16:38.405958  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:16:38.406027  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:16:38.447860  360776 cri.go:89] found id: ""
	I0229 02:16:38.447894  360776 logs.go:276] 0 containers: []
	W0229 02:16:38.447907  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:16:38.447915  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:16:38.447983  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:16:38.489711  360776 cri.go:89] found id: ""
	I0229 02:16:38.489737  360776 logs.go:276] 0 containers: []
	W0229 02:16:38.489746  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:16:38.489752  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:16:38.489815  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:16:38.527094  360776 cri.go:89] found id: ""
	I0229 02:16:38.527120  360776 logs.go:276] 0 containers: []
	W0229 02:16:38.527128  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:16:38.527135  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:16:38.527202  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:16:38.564125  360776 cri.go:89] found id: ""
	I0229 02:16:38.564165  360776 logs.go:276] 0 containers: []
	W0229 02:16:38.564175  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:16:38.564183  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:16:38.564257  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:16:38.604355  360776 cri.go:89] found id: ""
	I0229 02:16:38.604385  360776 logs.go:276] 0 containers: []
	W0229 02:16:38.604394  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:16:38.604401  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:16:38.604471  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:16:38.642291  360776 cri.go:89] found id: ""
	I0229 02:16:38.642329  360776 logs.go:276] 0 containers: []
	W0229 02:16:38.642338  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:16:38.642345  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:16:38.642425  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:16:38.684559  360776 cri.go:89] found id: ""
	I0229 02:16:38.684605  360776 logs.go:276] 0 containers: []
	W0229 02:16:38.684617  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:16:38.684632  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:16:38.684646  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:16:38.735189  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:16:38.735230  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:16:38.750359  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:16:38.750388  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:16:38.832749  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:16:38.832777  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:16:38.832793  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:16:38.871321  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:16:38.871355  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:16:41.429960  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:16:41.445002  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:16:41.445081  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:16:41.487833  360776 cri.go:89] found id: ""
	I0229 02:16:41.487867  360776 logs.go:276] 0 containers: []
	W0229 02:16:41.487880  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:16:41.487889  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:16:41.487953  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:16:41.527667  360776 cri.go:89] found id: ""
	I0229 02:16:41.527691  360776 logs.go:276] 0 containers: []
	W0229 02:16:41.527700  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:16:41.527706  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:16:41.527767  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:16:41.568252  360776 cri.go:89] found id: ""
	I0229 02:16:41.568279  360776 logs.go:276] 0 containers: []
	W0229 02:16:41.568289  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:16:41.568295  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:16:41.568347  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:16:41.606664  360776 cri.go:89] found id: ""
	I0229 02:16:41.606697  360776 logs.go:276] 0 containers: []
	W0229 02:16:41.606709  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:16:41.606717  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:16:41.606787  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:16:41.643384  360776 cri.go:89] found id: ""
	I0229 02:16:41.643413  360776 logs.go:276] 0 containers: []
	W0229 02:16:41.643425  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:16:41.643433  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:16:41.643488  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:16:41.685132  360776 cri.go:89] found id: ""
	I0229 02:16:41.685165  360776 logs.go:276] 0 containers: []
	W0229 02:16:41.685179  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:16:41.685188  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:16:41.685255  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:16:41.725844  360776 cri.go:89] found id: ""
	I0229 02:16:41.725874  360776 logs.go:276] 0 containers: []
	W0229 02:16:41.725888  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:16:41.725901  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:16:41.725959  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:16:41.764651  360776 cri.go:89] found id: ""
	I0229 02:16:41.764684  360776 logs.go:276] 0 containers: []
	W0229 02:16:41.764710  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:16:41.764728  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:16:41.764745  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:16:41.846499  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:16:41.846520  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:16:41.846534  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:16:41.889415  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:16:41.889454  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:16:41.955514  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:16:41.955554  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:16:42.011187  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:16:42.011231  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:16:38.736997  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:40.737109  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:39.778387  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:41.780284  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:40.308479  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:42.807142  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:44.808770  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:44.528746  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:16:44.544657  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:16:44.544735  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:16:44.584593  360776 cri.go:89] found id: ""
	I0229 02:16:44.584619  360776 logs.go:276] 0 containers: []
	W0229 02:16:44.584628  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:16:44.584634  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:16:44.584703  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:16:44.621819  360776 cri.go:89] found id: ""
	I0229 02:16:44.621851  360776 logs.go:276] 0 containers: []
	W0229 02:16:44.621863  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:16:44.621870  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:16:44.621936  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:16:44.661908  360776 cri.go:89] found id: ""
	I0229 02:16:44.661939  360776 logs.go:276] 0 containers: []
	W0229 02:16:44.661951  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:16:44.661959  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:16:44.662042  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:16:44.703135  360776 cri.go:89] found id: ""
	I0229 02:16:44.703168  360776 logs.go:276] 0 containers: []
	W0229 02:16:44.703179  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:16:44.703186  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:16:44.703256  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:16:44.742783  360776 cri.go:89] found id: ""
	I0229 02:16:44.742812  360776 logs.go:276] 0 containers: []
	W0229 02:16:44.742823  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:16:44.742831  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:16:44.742900  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:16:44.786223  360776 cri.go:89] found id: ""
	I0229 02:16:44.786258  360776 logs.go:276] 0 containers: []
	W0229 02:16:44.786271  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:16:44.786280  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:16:44.786348  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:16:44.832269  360776 cri.go:89] found id: ""
	I0229 02:16:44.832295  360776 logs.go:276] 0 containers: []
	W0229 02:16:44.832304  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:16:44.832312  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:16:44.832371  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:16:44.882497  360776 cri.go:89] found id: ""
	I0229 02:16:44.882529  360776 logs.go:276] 0 containers: []
	W0229 02:16:44.882541  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:16:44.882554  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:16:44.882572  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:16:44.898452  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:16:44.898484  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:16:44.988062  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:16:44.988089  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:16:44.988106  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:16:45.025317  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:16:45.025353  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:16:45.069804  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:16:45.069843  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:16:43.236422  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:45.236874  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:47.238514  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:44.277544  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:46.279502  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:48.280224  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:46.809509  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:49.307555  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:47.621890  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:16:47.636506  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:16:47.636572  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:16:47.679975  360776 cri.go:89] found id: ""
	I0229 02:16:47.680007  360776 logs.go:276] 0 containers: []
	W0229 02:16:47.680019  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:16:47.680026  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:16:47.680099  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:16:47.720573  360776 cri.go:89] found id: ""
	I0229 02:16:47.720604  360776 logs.go:276] 0 containers: []
	W0229 02:16:47.720616  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:16:47.720628  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:16:47.720693  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:16:47.762211  360776 cri.go:89] found id: ""
	I0229 02:16:47.762239  360776 logs.go:276] 0 containers: []
	W0229 02:16:47.762256  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:16:47.762264  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:16:47.762325  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:16:47.801703  360776 cri.go:89] found id: ""
	I0229 02:16:47.801726  360776 logs.go:276] 0 containers: []
	W0229 02:16:47.801736  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:16:47.801745  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:16:47.801804  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:16:47.843036  360776 cri.go:89] found id: ""
	I0229 02:16:47.843065  360776 logs.go:276] 0 containers: []
	W0229 02:16:47.843074  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:16:47.843087  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:16:47.843137  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:16:47.901986  360776 cri.go:89] found id: ""
	I0229 02:16:47.902016  360776 logs.go:276] 0 containers: []
	W0229 02:16:47.902029  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:16:47.902037  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:16:47.902115  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:16:47.970578  360776 cri.go:89] found id: ""
	I0229 02:16:47.970626  360776 logs.go:276] 0 containers: []
	W0229 02:16:47.970638  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:16:47.970646  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:16:47.970727  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:16:48.008245  360776 cri.go:89] found id: ""
	I0229 02:16:48.008280  360776 logs.go:276] 0 containers: []
	W0229 02:16:48.008290  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:16:48.008303  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:16:48.008318  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:16:48.059243  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:16:48.059277  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:16:48.109287  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:16:48.109328  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:16:48.124720  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:16:48.124747  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:16:48.201686  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:16:48.201734  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:16:48.201750  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:16:50.740237  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:16:50.755100  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:16:50.755174  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:16:50.799284  360776 cri.go:89] found id: ""
	I0229 02:16:50.799304  360776 logs.go:276] 0 containers: []
	W0229 02:16:50.799312  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:16:50.799318  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:16:50.799367  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:16:50.863582  360776 cri.go:89] found id: ""
	I0229 02:16:50.863617  360776 logs.go:276] 0 containers: []
	W0229 02:16:50.863630  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:16:50.863638  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:16:50.863709  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:16:50.913067  360776 cri.go:89] found id: ""
	I0229 02:16:50.913097  360776 logs.go:276] 0 containers: []
	W0229 02:16:50.913107  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:16:50.913114  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:16:50.913181  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:16:50.964343  360776 cri.go:89] found id: ""
	I0229 02:16:50.964372  360776 logs.go:276] 0 containers: []
	W0229 02:16:50.964381  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:16:50.964387  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:16:50.964443  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:16:51.008180  360776 cri.go:89] found id: ""
	I0229 02:16:51.008215  360776 logs.go:276] 0 containers: []
	W0229 02:16:51.008226  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:16:51.008234  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:16:51.008314  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:16:51.050574  360776 cri.go:89] found id: ""
	I0229 02:16:51.050604  360776 logs.go:276] 0 containers: []
	W0229 02:16:51.050613  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:16:51.050619  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:16:51.050682  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:16:51.094144  360776 cri.go:89] found id: ""
	I0229 02:16:51.094170  360776 logs.go:276] 0 containers: []
	W0229 02:16:51.094180  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:16:51.094187  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:16:51.094254  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:16:51.133928  360776 cri.go:89] found id: ""
	I0229 02:16:51.133963  360776 logs.go:276] 0 containers: []
	W0229 02:16:51.133976  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:16:51.133989  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:16:51.134005  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:16:51.169857  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:16:51.169888  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:16:51.211739  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:16:51.211774  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:16:51.267237  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:16:51.267277  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:16:51.285167  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:16:51.285200  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:16:51.361051  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:16:49.736852  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:52.235969  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:50.781150  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:53.277926  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:51.307606  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:53.308568  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:53.861859  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:16:53.879047  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:16:53.879124  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:16:53.931722  360776 cri.go:89] found id: ""
	I0229 02:16:53.931751  360776 logs.go:276] 0 containers: []
	W0229 02:16:53.931761  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:16:53.931770  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:16:53.931843  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:16:53.989223  360776 cri.go:89] found id: ""
	I0229 02:16:53.989250  360776 logs.go:276] 0 containers: []
	W0229 02:16:53.989259  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:16:53.989266  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:16:53.989316  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:16:54.029340  360776 cri.go:89] found id: ""
	I0229 02:16:54.029367  360776 logs.go:276] 0 containers: []
	W0229 02:16:54.029379  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:16:54.029394  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:16:54.029455  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:16:54.065032  360776 cri.go:89] found id: ""
	I0229 02:16:54.065061  360776 logs.go:276] 0 containers: []
	W0229 02:16:54.065072  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:16:54.065081  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:16:54.065148  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:16:54.103739  360776 cri.go:89] found id: ""
	I0229 02:16:54.103771  360776 logs.go:276] 0 containers: []
	W0229 02:16:54.103783  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:16:54.103791  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:16:54.103886  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:16:54.146653  360776 cri.go:89] found id: ""
	I0229 02:16:54.146706  360776 logs.go:276] 0 containers: []
	W0229 02:16:54.146720  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:16:54.146728  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:16:54.146804  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:16:54.183885  360776 cri.go:89] found id: ""
	I0229 02:16:54.183909  360776 logs.go:276] 0 containers: []
	W0229 02:16:54.183917  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:16:54.183923  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:16:54.183985  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:16:54.223712  360776 cri.go:89] found id: ""
	I0229 02:16:54.223739  360776 logs.go:276] 0 containers: []
	W0229 02:16:54.223748  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:16:54.223758  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:16:54.223776  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:16:54.239418  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:16:54.239443  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:16:54.316236  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:16:54.316262  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:16:54.316278  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:16:54.351899  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:16:54.351933  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:16:54.396954  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:16:54.396990  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:16:56.949058  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:16:56.965888  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:16:56.965966  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:16:57.010067  360776 cri.go:89] found id: ""
	I0229 02:16:57.010114  360776 logs.go:276] 0 containers: []
	W0229 02:16:57.010127  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:16:57.010136  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:16:57.010199  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:16:57.048082  360776 cri.go:89] found id: ""
	I0229 02:16:57.048108  360776 logs.go:276] 0 containers: []
	W0229 02:16:57.048116  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:16:57.048123  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:16:57.048172  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:16:57.082859  360776 cri.go:89] found id: ""
	I0229 02:16:57.082890  360776 logs.go:276] 0 containers: []
	W0229 02:16:57.082903  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:16:57.082910  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:16:57.082971  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:16:57.118291  360776 cri.go:89] found id: ""
	I0229 02:16:57.118321  360776 logs.go:276] 0 containers: []
	W0229 02:16:57.118331  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:16:57.118338  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:16:57.118396  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:16:57.155920  360776 cri.go:89] found id: ""
	I0229 02:16:57.155945  360776 logs.go:276] 0 containers: []
	W0229 02:16:57.155954  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:16:57.155960  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:16:57.156007  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:16:57.198460  360776 cri.go:89] found id: ""
	I0229 02:16:57.198494  360776 logs.go:276] 0 containers: []
	W0229 02:16:57.198503  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:16:57.198515  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:16:57.198576  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:16:57.239178  360776 cri.go:89] found id: ""
	I0229 02:16:57.239206  360776 logs.go:276] 0 containers: []
	W0229 02:16:57.239214  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:16:57.239220  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:16:57.239267  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:16:57.280933  360776 cri.go:89] found id: ""
	I0229 02:16:57.280964  360776 logs.go:276] 0 containers: []
	W0229 02:16:57.280977  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:16:57.280988  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:16:57.281004  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:16:57.341023  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:16:57.341056  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:16:54.237542  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:56.736019  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:55.778328  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:58.281018  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:55.309863  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:57.311910  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:59.807723  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:57.356053  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:16:57.356083  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:16:57.435017  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:16:57.435040  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:16:57.435057  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:16:57.472428  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:16:57.472461  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:17:00.020707  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:17:00.035406  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:17:00.035476  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:17:00.072190  360776 cri.go:89] found id: ""
	I0229 02:17:00.072222  360776 logs.go:276] 0 containers: []
	W0229 02:17:00.072231  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:17:00.072237  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:17:00.072289  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:17:00.108829  360776 cri.go:89] found id: ""
	I0229 02:17:00.108857  360776 logs.go:276] 0 containers: []
	W0229 02:17:00.108868  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:17:00.108875  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:17:00.108927  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:17:00.143429  360776 cri.go:89] found id: ""
	I0229 02:17:00.143450  360776 logs.go:276] 0 containers: []
	W0229 02:17:00.143459  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:17:00.143465  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:17:00.143512  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:17:00.180428  360776 cri.go:89] found id: ""
	I0229 02:17:00.180456  360776 logs.go:276] 0 containers: []
	W0229 02:17:00.180467  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:17:00.180496  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:17:00.180564  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:17:00.220115  360776 cri.go:89] found id: ""
	I0229 02:17:00.220143  360776 logs.go:276] 0 containers: []
	W0229 02:17:00.220155  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:17:00.220163  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:17:00.220220  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:17:00.258851  360776 cri.go:89] found id: ""
	I0229 02:17:00.258877  360776 logs.go:276] 0 containers: []
	W0229 02:17:00.258887  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:17:00.258895  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:17:00.258982  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:17:00.304148  360776 cri.go:89] found id: ""
	I0229 02:17:00.304174  360776 logs.go:276] 0 containers: []
	W0229 02:17:00.304185  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:17:00.304193  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:17:00.304277  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:17:00.345893  360776 cri.go:89] found id: ""
	I0229 02:17:00.345923  360776 logs.go:276] 0 containers: []
	W0229 02:17:00.345935  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:17:00.345950  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:17:00.345965  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:17:00.395977  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:17:00.396006  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:17:00.410948  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:17:00.410970  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:17:00.485724  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:17:00.485745  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:17:00.485760  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:17:00.520496  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:17:00.520531  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:16:59.236302  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:17:01.237806  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:17:00.777736  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:17:03.280794  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:17:01.807808  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:17:03.818535  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:17:03.065669  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:17:03.081434  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:17:03.081496  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:17:03.118752  360776 cri.go:89] found id: ""
	I0229 02:17:03.118779  360776 logs.go:276] 0 containers: []
	W0229 02:17:03.118788  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:17:03.118794  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:17:03.118870  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:17:03.156172  360776 cri.go:89] found id: ""
	I0229 02:17:03.156197  360776 logs.go:276] 0 containers: []
	W0229 02:17:03.156209  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:17:03.156216  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:17:03.156285  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:17:03.190792  360776 cri.go:89] found id: ""
	I0229 02:17:03.190815  360776 logs.go:276] 0 containers: []
	W0229 02:17:03.190823  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:17:03.190829  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:17:03.190885  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:17:03.229692  360776 cri.go:89] found id: ""
	I0229 02:17:03.229721  360776 logs.go:276] 0 containers: []
	W0229 02:17:03.229733  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:17:03.229741  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:17:03.229800  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:17:03.271014  360776 cri.go:89] found id: ""
	I0229 02:17:03.271044  360776 logs.go:276] 0 containers: []
	W0229 02:17:03.271053  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:17:03.271058  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:17:03.271118  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:17:03.315291  360776 cri.go:89] found id: ""
	I0229 02:17:03.315316  360776 logs.go:276] 0 containers: []
	W0229 02:17:03.315325  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:17:03.315332  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:17:03.315390  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:17:03.354974  360776 cri.go:89] found id: ""
	I0229 02:17:03.354998  360776 logs.go:276] 0 containers: []
	W0229 02:17:03.355007  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:17:03.355014  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:17:03.355091  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:17:03.394044  360776 cri.go:89] found id: ""
	I0229 02:17:03.394074  360776 logs.go:276] 0 containers: []
	W0229 02:17:03.394101  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:17:03.394120  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:17:03.394138  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:17:03.430131  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:17:03.430164  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:17:03.472760  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:17:03.472793  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:17:03.522797  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:17:03.522837  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:17:03.538642  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:17:03.538672  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:17:03.611189  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:17:06.112319  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:17:06.126843  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:17:06.126924  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:17:06.171970  360776 cri.go:89] found id: ""
	I0229 02:17:06.171995  360776 logs.go:276] 0 containers: []
	W0229 02:17:06.172005  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:17:06.172011  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:17:06.172060  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:17:06.208082  360776 cri.go:89] found id: ""
	I0229 02:17:06.208114  360776 logs.go:276] 0 containers: []
	W0229 02:17:06.208126  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:17:06.208133  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:17:06.208211  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:17:06.246429  360776 cri.go:89] found id: ""
	I0229 02:17:06.246454  360776 logs.go:276] 0 containers: []
	W0229 02:17:06.246465  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:17:06.246472  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:17:06.246521  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:17:06.286908  360776 cri.go:89] found id: ""
	I0229 02:17:06.286941  360776 logs.go:276] 0 containers: []
	W0229 02:17:06.286952  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:17:06.286959  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:17:06.287036  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:17:06.330632  360776 cri.go:89] found id: ""
	I0229 02:17:06.330664  360776 logs.go:276] 0 containers: []
	W0229 02:17:06.330707  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:17:06.330720  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:17:06.330793  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:17:06.368385  360776 cri.go:89] found id: ""
	I0229 02:17:06.368412  360776 logs.go:276] 0 containers: []
	W0229 02:17:06.368423  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:17:06.368431  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:17:06.368499  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:17:06.407424  360776 cri.go:89] found id: ""
	I0229 02:17:06.407456  360776 logs.go:276] 0 containers: []
	W0229 02:17:06.407468  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:17:06.407476  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:17:06.407542  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:17:06.447043  360776 cri.go:89] found id: ""
	I0229 02:17:06.447072  360776 logs.go:276] 0 containers: []
	W0229 02:17:06.447084  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:17:06.447098  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:17:06.447119  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:17:06.501604  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:17:06.501639  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:17:06.516247  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:17:06.516274  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:17:06.593087  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:17:06.593112  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:17:06.593126  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:17:06.633057  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:17:06.633097  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:17:03.735552  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:17:05.735757  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:17:07.736746  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:17:05.777670  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:17:07.779116  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:17:06.308986  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:17:08.808349  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:17:09.202624  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:17:09.218424  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:17:09.218496  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:17:09.264508  360776 cri.go:89] found id: ""
	I0229 02:17:09.264538  360776 logs.go:276] 0 containers: []
	W0229 02:17:09.264551  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:17:09.264560  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:17:09.264652  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:17:09.304507  360776 cri.go:89] found id: ""
	I0229 02:17:09.304536  360776 logs.go:276] 0 containers: []
	W0229 02:17:09.304547  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:17:09.304555  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:17:09.304619  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:17:09.354779  360776 cri.go:89] found id: ""
	I0229 02:17:09.354802  360776 logs.go:276] 0 containers: []
	W0229 02:17:09.354811  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:17:09.354817  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:17:09.354866  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:17:09.390031  360776 cri.go:89] found id: ""
	I0229 02:17:09.390065  360776 logs.go:276] 0 containers: []
	W0229 02:17:09.390097  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:17:09.390106  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:17:09.390182  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:17:09.435618  360776 cri.go:89] found id: ""
	I0229 02:17:09.435652  360776 logs.go:276] 0 containers: []
	W0229 02:17:09.435666  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:17:09.435674  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:17:09.435757  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:17:09.479110  360776 cri.go:89] found id: ""
	I0229 02:17:09.479142  360776 logs.go:276] 0 containers: []
	W0229 02:17:09.479154  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:17:09.479163  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:17:09.479236  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:17:09.520748  360776 cri.go:89] found id: ""
	I0229 02:17:09.520781  360776 logs.go:276] 0 containers: []
	W0229 02:17:09.520794  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:17:09.520802  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:17:09.520879  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:17:09.561536  360776 cri.go:89] found id: ""
	I0229 02:17:09.561576  360776 logs.go:276] 0 containers: []
	W0229 02:17:09.561590  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:17:09.561611  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:17:09.561628  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:17:09.621631  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:17:09.621678  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:17:09.640562  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:17:09.640607  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:17:09.727979  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:17:09.728001  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:17:09.728013  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:17:09.766305  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:17:09.766340  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:17:12.312841  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:17:12.329745  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:17:12.329826  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:17:10.236840  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:17:12.736224  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:17:09.779304  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:17:12.277545  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:17:11.308061  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:17:13.808929  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:17:12.376185  360776 cri.go:89] found id: ""
	I0229 02:17:12.376218  360776 logs.go:276] 0 containers: []
	W0229 02:17:12.376230  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:17:12.376240  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:17:12.376317  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:17:12.417025  360776 cri.go:89] found id: ""
	I0229 02:17:12.417059  360776 logs.go:276] 0 containers: []
	W0229 02:17:12.417068  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:17:12.417080  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:17:12.417162  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:17:12.458973  360776 cri.go:89] found id: ""
	I0229 02:17:12.459018  360776 logs.go:276] 0 containers: []
	W0229 02:17:12.459040  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:17:12.459048  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:17:12.459116  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:17:12.500063  360776 cri.go:89] found id: ""
	I0229 02:17:12.500090  360776 logs.go:276] 0 containers: []
	W0229 02:17:12.500102  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:17:12.500110  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:17:12.500177  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:17:12.543182  360776 cri.go:89] found id: ""
	I0229 02:17:12.543213  360776 logs.go:276] 0 containers: []
	W0229 02:17:12.543225  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:17:12.543234  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:17:12.543296  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:17:12.584725  360776 cri.go:89] found id: ""
	I0229 02:17:12.584773  360776 logs.go:276] 0 containers: []
	W0229 02:17:12.584796  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:17:12.584804  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:17:12.584873  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:17:12.634212  360776 cri.go:89] found id: ""
	I0229 02:17:12.634244  360776 logs.go:276] 0 containers: []
	W0229 02:17:12.634256  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:17:12.634263  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:17:12.634330  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:17:12.686103  360776 cri.go:89] found id: ""
	I0229 02:17:12.686134  360776 logs.go:276] 0 containers: []
	W0229 02:17:12.686144  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:17:12.686154  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:17:12.686168  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:17:12.753950  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:17:12.753999  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:17:12.769400  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:17:12.769430  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:17:12.856362  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:17:12.856390  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:17:12.856408  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:17:12.893238  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:17:12.893274  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
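	[editor's note] When every probe comes back empty, minikube falls back to gathering a fixed set of diagnostics. The five "Gathering logs for ..." steps above map one-to-one onto shell commands; only "describe nodes" needs a live apiserver, which is why it alone fails with connection refused on localhost:8443. A sketch that runs the same commands locally (the SSH harness is simplified away; the command strings are copied verbatim from the log):

```go
// Sketch: the five "Gathering logs for ..." steps map onto these shell
// commands, here run locally via bash instead of over SSH. The kubectl
// and kubeconfig paths are copied from the log above.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	gathers := []struct{ name, cmd string }{
		{"kubelet", "sudo journalctl -u kubelet -n 400"},
		{"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
		{"describe nodes", "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"},
		{"containerd", "sudo journalctl -u containerd -n 400"},
		{"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
	}
	for _, g := range gathers {
		fmt.Printf("Gathering logs for %s ...\n", g.name)
		out, err := exec.Command("/bin/bash", "-c", g.cmd).CombinedOutput()
		if err != nil {
			// Without an apiserver on localhost:8443, "describe nodes"
			// exits 1 with "connection refused", as in the log.
			fmt.Printf("failed %s: %v\n", g.name, err)
			continue
		}
		fmt.Printf("%s: %d bytes\n", g.name, len(out))
	}
}
```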
	I0229 02:17:15.439069  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:17:15.455698  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:17:15.455779  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:17:15.501222  360776 cri.go:89] found id: ""
	I0229 02:17:15.501248  360776 logs.go:276] 0 containers: []
	W0229 02:17:15.501262  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:17:15.501269  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:17:15.501331  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:17:15.544580  360776 cri.go:89] found id: ""
	I0229 02:17:15.544610  360776 logs.go:276] 0 containers: []
	W0229 02:17:15.544623  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:17:15.544632  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:17:15.544697  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:17:15.587250  360776 cri.go:89] found id: ""
	I0229 02:17:15.587301  360776 logs.go:276] 0 containers: []
	W0229 02:17:15.587314  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:17:15.587322  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:17:15.587392  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:17:15.660189  360776 cri.go:89] found id: ""
	I0229 02:17:15.660214  360776 logs.go:276] 0 containers: []
	W0229 02:17:15.660223  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:17:15.660229  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:17:15.660280  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:17:15.715100  360776 cri.go:89] found id: ""
	I0229 02:17:15.715126  360776 logs.go:276] 0 containers: []
	W0229 02:17:15.715136  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:17:15.715142  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:17:15.715203  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:17:15.758998  360776 cri.go:89] found id: ""
	I0229 02:17:15.759028  360776 logs.go:276] 0 containers: []
	W0229 02:17:15.759047  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:17:15.759053  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:17:15.759118  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:17:15.801175  360776 cri.go:89] found id: ""
	I0229 02:17:15.801203  360776 logs.go:276] 0 containers: []
	W0229 02:17:15.801215  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:17:15.801224  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:17:15.801294  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:17:15.849643  360776 cri.go:89] found id: ""
	I0229 02:17:15.849678  360776 logs.go:276] 0 containers: []
	W0229 02:17:15.849690  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:17:15.849704  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:17:15.849724  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:17:15.864824  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:17:15.864856  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:17:15.937271  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:17:15.937299  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:17:15.937313  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:17:15.976404  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:17:15.976448  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:17:16.025658  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:17:16.025697  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:17:15.235863  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:17:17.237685  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:17:14.279268  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:17:16.280226  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:17:18.779746  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:17:16.307548  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:17:18.806653  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
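	[editor's note] The interleaved pod_ready lines come from three other test profiles (PIDs 360079, 360217, 361093) running concurrently, each polling its metrics-server pod's Ready condition roughly every two seconds, per the timestamps. A sketch of that wait, with the 4m0s budget that expires at 02:17:31 further below; the pod name is copied from the log, and the kubectl-based check stands in for minikube's client-go call:

```go
// Sketch of the pod_ready wait: poll the Pod's Ready condition until a
// 4m0s deadline, checking every ~2s (the cadence visible in the log
// timestamps).
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	const pod = "metrics-server-57f55c9bc5-5lfgm" // name taken from the log
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "-n", "kube-system", "get",
			"pod", pod, "-o",
			`jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
		if err == nil && strings.TrimSpace(string(out)) == "True" {
			fmt.Println("pod is Ready")
			return
		}
		fmt.Printf("pod %q has status \"Ready\":\"False\"\n", pod)
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting 4m0s for pod to be Ready")
}
```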
	I0229 02:17:18.574763  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:17:18.593695  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:17:18.593802  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:17:18.641001  360776 cri.go:89] found id: ""
	I0229 02:17:18.641033  360776 logs.go:276] 0 containers: []
	W0229 02:17:18.641042  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:17:18.641048  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:17:18.641106  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:17:18.701580  360776 cri.go:89] found id: ""
	I0229 02:17:18.701608  360776 logs.go:276] 0 containers: []
	W0229 02:17:18.701617  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:17:18.701623  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:17:18.701674  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:17:18.742596  360776 cri.go:89] found id: ""
	I0229 02:17:18.742632  360776 logs.go:276] 0 containers: []
	W0229 02:17:18.742642  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:17:18.742649  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:17:18.742712  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:17:18.782404  360776 cri.go:89] found id: ""
	I0229 02:17:18.782432  360776 logs.go:276] 0 containers: []
	W0229 02:17:18.782443  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:17:18.782451  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:17:18.782516  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:17:18.826221  360776 cri.go:89] found id: ""
	I0229 02:17:18.826250  360776 logs.go:276] 0 containers: []
	W0229 02:17:18.826262  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:17:18.826270  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:17:18.826354  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:17:18.864698  360776 cri.go:89] found id: ""
	I0229 02:17:18.864737  360776 logs.go:276] 0 containers: []
	W0229 02:17:18.864746  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:17:18.864766  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:17:18.864819  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:17:18.902681  360776 cri.go:89] found id: ""
	I0229 02:17:18.902708  360776 logs.go:276] 0 containers: []
	W0229 02:17:18.902718  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:17:18.902723  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:17:18.902835  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:17:18.942178  360776 cri.go:89] found id: ""
	I0229 02:17:18.942203  360776 logs.go:276] 0 containers: []
	W0229 02:17:18.942213  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:17:18.942223  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:17:18.942236  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:17:18.983914  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:17:18.983947  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:17:19.041670  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:17:19.041710  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:17:19.057445  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:17:19.057475  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:17:19.128946  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:17:19.128974  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:17:19.129007  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:17:21.664806  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:17:21.680938  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:17:21.681037  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:17:21.737776  360776 cri.go:89] found id: ""
	I0229 02:17:21.737808  360776 logs.go:276] 0 containers: []
	W0229 02:17:21.737825  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:17:21.737833  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:17:21.737913  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:17:21.778917  360776 cri.go:89] found id: ""
	I0229 02:17:21.778951  360776 logs.go:276] 0 containers: []
	W0229 02:17:21.778962  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:17:21.778969  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:17:21.779033  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:17:21.819099  360776 cri.go:89] found id: ""
	I0229 02:17:21.819127  360776 logs.go:276] 0 containers: []
	W0229 02:17:21.819139  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:17:21.819147  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:17:21.819230  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:17:21.861290  360776 cri.go:89] found id: ""
	I0229 02:17:21.861323  360776 logs.go:276] 0 containers: []
	W0229 02:17:21.861334  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:17:21.861342  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:17:21.861406  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:17:21.900886  360776 cri.go:89] found id: ""
	I0229 02:17:21.900926  360776 logs.go:276] 0 containers: []
	W0229 02:17:21.900938  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:17:21.900946  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:17:21.901021  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:17:21.943023  360776 cri.go:89] found id: ""
	I0229 02:17:21.943060  360776 logs.go:276] 0 containers: []
	W0229 02:17:21.943072  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:17:21.943080  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:17:21.943145  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:17:21.984305  360776 cri.go:89] found id: ""
	I0229 02:17:21.984341  360776 logs.go:276] 0 containers: []
	W0229 02:17:21.984352  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:17:21.984360  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:17:21.984428  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:17:22.025326  360776 cri.go:89] found id: ""
	I0229 02:17:22.025356  360776 logs.go:276] 0 containers: []
	W0229 02:17:22.025368  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:17:22.025382  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:17:22.025398  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:17:22.074977  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:17:22.075020  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:17:22.092483  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:17:22.092518  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:17:22.171791  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:17:22.171814  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:17:22.171833  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:17:22.211794  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:17:22.211850  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:17:19.736684  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:17:21.737510  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:17:21.278089  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:17:23.278374  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:17:20.808574  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:17:23.307697  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:17:24.758800  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:17:24.773418  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:17:24.773501  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:17:24.819487  360776 cri.go:89] found id: ""
	I0229 02:17:24.819520  360776 logs.go:276] 0 containers: []
	W0229 02:17:24.819531  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:17:24.819540  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:17:24.819605  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:17:24.859906  360776 cri.go:89] found id: ""
	I0229 02:17:24.859938  360776 logs.go:276] 0 containers: []
	W0229 02:17:24.859949  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:17:24.859957  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:17:24.860022  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:17:24.897499  360776 cri.go:89] found id: ""
	I0229 02:17:24.897531  360776 logs.go:276] 0 containers: []
	W0229 02:17:24.897540  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:17:24.897547  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:17:24.897622  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:17:24.935346  360776 cri.go:89] found id: ""
	I0229 02:17:24.935380  360776 logs.go:276] 0 containers: []
	W0229 02:17:24.935393  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:17:24.935401  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:17:24.935468  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:17:24.973567  360776 cri.go:89] found id: ""
	I0229 02:17:24.973591  360776 logs.go:276] 0 containers: []
	W0229 02:17:24.973600  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:17:24.973605  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:17:24.973657  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:17:25.016166  360776 cri.go:89] found id: ""
	I0229 02:17:25.016198  360776 logs.go:276] 0 containers: []
	W0229 02:17:25.016210  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:17:25.016217  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:17:25.016285  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:17:25.059944  360776 cri.go:89] found id: ""
	I0229 02:17:25.059977  360776 logs.go:276] 0 containers: []
	W0229 02:17:25.059991  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:17:25.059999  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:17:25.060057  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:17:25.101594  360776 cri.go:89] found id: ""
	I0229 02:17:25.101627  360776 logs.go:276] 0 containers: []
	W0229 02:17:25.101639  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:17:25.101652  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:17:25.101672  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:17:25.183940  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:17:25.183988  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:17:25.184007  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:17:25.219286  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:17:25.219327  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:17:25.267048  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:17:25.267107  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:17:25.320969  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:17:25.320998  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:17:24.236957  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:17:26.736244  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:17:25.278532  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:17:27.777655  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:17:25.308061  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:17:27.806994  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:17:27.846314  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:17:27.861349  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:17:27.861416  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:17:27.901126  360776 cri.go:89] found id: ""
	I0229 02:17:27.901153  360776 logs.go:276] 0 containers: []
	W0229 02:17:27.901162  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:17:27.901169  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:17:27.901220  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:17:27.942692  360776 cri.go:89] found id: ""
	I0229 02:17:27.942725  360776 logs.go:276] 0 containers: []
	W0229 02:17:27.942738  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:17:27.942745  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:17:27.942803  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:17:27.978891  360776 cri.go:89] found id: ""
	I0229 02:17:27.978919  360776 logs.go:276] 0 containers: []
	W0229 02:17:27.978928  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:17:27.978934  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:17:27.978991  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:17:28.019688  360776 cri.go:89] found id: ""
	I0229 02:17:28.019723  360776 logs.go:276] 0 containers: []
	W0229 02:17:28.019735  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:17:28.019743  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:17:28.019799  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:17:28.056414  360776 cri.go:89] found id: ""
	I0229 02:17:28.056438  360776 logs.go:276] 0 containers: []
	W0229 02:17:28.056451  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:17:28.056457  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:17:28.056504  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:17:28.093691  360776 cri.go:89] found id: ""
	I0229 02:17:28.093727  360776 logs.go:276] 0 containers: []
	W0229 02:17:28.093739  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:17:28.093747  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:17:28.093806  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:17:28.130737  360776 cri.go:89] found id: ""
	I0229 02:17:28.130761  360776 logs.go:276] 0 containers: []
	W0229 02:17:28.130768  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:17:28.130774  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:17:28.130828  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:17:28.167783  360776 cri.go:89] found id: ""
	I0229 02:17:28.167810  360776 logs.go:276] 0 containers: []
	W0229 02:17:28.167820  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:17:28.167832  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:17:28.167850  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:17:28.248054  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:17:28.248080  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:17:28.248096  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:17:28.284935  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:17:28.284963  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:17:28.328563  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:17:28.328605  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:17:28.379372  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:17:28.379412  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:17:30.896570  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:17:30.912070  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:17:30.912140  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:17:30.951633  360776 cri.go:89] found id: ""
	I0229 02:17:30.951662  360776 logs.go:276] 0 containers: []
	W0229 02:17:30.951674  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:17:30.951681  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:17:30.951725  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:17:30.988094  360776 cri.go:89] found id: ""
	I0229 02:17:30.988121  360776 logs.go:276] 0 containers: []
	W0229 02:17:30.988133  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:17:30.988141  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:17:30.988197  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:17:31.025379  360776 cri.go:89] found id: ""
	I0229 02:17:31.025405  360776 logs.go:276] 0 containers: []
	W0229 02:17:31.025416  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:17:31.025423  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:17:31.025476  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:17:31.064070  360776 cri.go:89] found id: ""
	I0229 02:17:31.064100  360776 logs.go:276] 0 containers: []
	W0229 02:17:31.064112  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:17:31.064120  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:17:31.064178  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:17:31.106455  360776 cri.go:89] found id: ""
	I0229 02:17:31.106487  360776 logs.go:276] 0 containers: []
	W0229 02:17:31.106498  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:17:31.106505  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:17:31.106564  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:17:31.141789  360776 cri.go:89] found id: ""
	I0229 02:17:31.141819  360776 logs.go:276] 0 containers: []
	W0229 02:17:31.141830  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:17:31.141838  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:17:31.141985  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:17:31.181781  360776 cri.go:89] found id: ""
	I0229 02:17:31.181807  360776 logs.go:276] 0 containers: []
	W0229 02:17:31.181815  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:17:31.181820  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:17:31.181877  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:17:31.222653  360776 cri.go:89] found id: ""
	I0229 02:17:31.222687  360776 logs.go:276] 0 containers: []
	W0229 02:17:31.222700  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:17:31.222713  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:17:31.222730  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:17:31.272067  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:17:31.272100  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:17:31.287890  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:17:31.287917  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:17:31.370516  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:17:31.370545  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:17:31.370559  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:17:31.416216  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:17:31.416257  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:17:29.235795  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:17:31.237540  360079 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace has status "Ready":"False"
	I0229 02:17:31.729967  360079 pod_ready.go:81] duration metric: took 4m0.001042569s waiting for pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace to be "Ready" ...
	E0229 02:17:31.729999  360079 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-5lfgm" in "kube-system" namespace to be "Ready" (will not retry!)
	I0229 02:17:31.730022  360079 pod_ready.go:38] duration metric: took 4m13.043743347s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 02:17:31.730062  360079 kubeadm.go:640] restartCluster took 4m31.356459787s
	W0229 02:17:31.730347  360079 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0229 02:17:31.730404  360079 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
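	[editor's note] The three pod_ready.go lines at 02:17:31 are the 4m0s budget from the polling above finally expiring for profile 360079: restartCluster gives up and minikube falls back to wiping the node with kubeadm reset before re-initializing. The control flow, sketched; waitForSystemPods is an illustrative stand-in for the pod_ready loop, not minikube's actual function:

```go
// Sketch of the fallback seen at 02:17:31: if waiting for system pods
// exceeds the budget, abandon the restart path and reset the node.
package main

import (
	"errors"
	"fmt"
	"os/exec"
	"time"
)

// waitForSystemPods is a placeholder for the pod_ready loop above, which
// in the log timed out after its full 4m0s budget.
func waitForSystemPods(timeout time.Duration) error {
	return errors.New("timed out waiting 4m0s for system pods")
}

func main() {
	if err := waitForSystemPods(4 * time.Minute); err != nil {
		fmt.Println("! Unable to restart cluster, will reset it:", err)
		cmd := exec.Command("sudo", "kubeadm", "reset",
			"--cri-socket", "/run/containerd/containerd.sock", "--force")
		if out, err := cmd.CombinedOutput(); err != nil {
			fmt.Printf("reset failed: %v\n%s", err, out)
		}
	}
}
```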
	I0229 02:17:29.777918  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:17:31.778158  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:17:30.307297  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:17:32.307846  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:17:34.309842  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:17:33.976724  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:17:33.991119  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:17:33.991202  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:17:34.038632  360776 cri.go:89] found id: ""
	I0229 02:17:34.038659  360776 logs.go:276] 0 containers: []
	W0229 02:17:34.038668  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:17:34.038674  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:17:34.038744  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:17:34.076069  360776 cri.go:89] found id: ""
	I0229 02:17:34.076109  360776 logs.go:276] 0 containers: []
	W0229 02:17:34.076120  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:17:34.076128  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:17:34.076212  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:17:34.122220  360776 cri.go:89] found id: ""
	I0229 02:17:34.122246  360776 logs.go:276] 0 containers: []
	W0229 02:17:34.122256  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:17:34.122265  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:17:34.122329  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:17:34.163216  360776 cri.go:89] found id: ""
	I0229 02:17:34.163246  360776 logs.go:276] 0 containers: []
	W0229 02:17:34.163259  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:17:34.163268  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:17:34.163337  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:17:34.206631  360776 cri.go:89] found id: ""
	I0229 02:17:34.206679  360776 logs.go:276] 0 containers: []
	W0229 02:17:34.206691  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:17:34.206698  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:17:34.206766  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:17:34.250992  360776 cri.go:89] found id: ""
	I0229 02:17:34.251024  360776 logs.go:276] 0 containers: []
	W0229 02:17:34.251037  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:17:34.251048  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:17:34.251116  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:17:34.289582  360776 cri.go:89] found id: ""
	I0229 02:17:34.289609  360776 logs.go:276] 0 containers: []
	W0229 02:17:34.289620  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:17:34.289626  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:17:34.289690  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:17:34.335130  360776 cri.go:89] found id: ""
	I0229 02:17:34.335158  360776 logs.go:276] 0 containers: []
	W0229 02:17:34.335169  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:17:34.335182  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:17:34.335198  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:17:34.365870  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:17:34.365920  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:17:34.462536  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:17:34.462567  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:17:34.462585  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:17:34.500235  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:17:34.500281  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:17:34.551106  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:17:34.551146  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:17:37.104547  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:17:37.123303  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:17:37.123367  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:17:37.164350  360776 cri.go:89] found id: ""
	I0229 02:17:37.164378  360776 logs.go:276] 0 containers: []
	W0229 02:17:37.164391  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:17:37.164401  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:17:37.164466  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:17:37.209965  360776 cri.go:89] found id: ""
	I0229 02:17:37.210000  360776 logs.go:276] 0 containers: []
	W0229 02:17:37.210014  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:17:37.210023  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:17:37.210125  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:17:37.253162  360776 cri.go:89] found id: ""
	I0229 02:17:37.253192  360776 logs.go:276] 0 containers: []
	W0229 02:17:37.253205  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:17:37.253213  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:17:37.253293  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:17:37.300836  360776 cri.go:89] found id: ""
	I0229 02:17:37.300862  360776 logs.go:276] 0 containers: []
	W0229 02:17:37.300872  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:17:37.300880  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:17:37.300944  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:17:37.343546  360776 cri.go:89] found id: ""
	I0229 02:17:37.343573  360776 logs.go:276] 0 containers: []
	W0229 02:17:37.343585  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:17:37.343598  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:17:37.343669  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:17:37.044032  360079 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (5.313599592s)
	I0229 02:17:37.044103  360079 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 02:17:37.062591  360079 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0229 02:17:37.074885  360079 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 02:17:37.086583  360079 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
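	[editor's note] The status-2 failure above is expected here: the kubeadm reset that completed at 02:17:37.044 just deleted those four kubeconfig files, so the stale-config cleanup is skipped and minikube proceeds straight to a fresh kubeadm init. The check itself is just an ls over the four files; a sketch:

```go
// Sketch of the stale-config check at 02:17:37: list the four kubeconfig
// files; a non-zero exit from ls (status 2 for inaccessible files) means
// they are gone, so stale-config cleanup is skipped.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	args := append([]string{"ls", "-la"}, files...)
	if out, err := exec.Command("sudo", args...).CombinedOutput(); err != nil {
		fmt.Printf("config check failed, skipping stale config cleanup: %v\n%s",
			err, out)
		return
	}
	fmt.Println("all kubeconfig files present; stale config cleanup may run")
}
```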
	I0229 02:17:37.086639  360079 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0229 02:17:37.155776  360079 kubeadm.go:322] [init] Using Kubernetes version: v1.29.0-rc.2
	I0229 02:17:37.155861  360079 kubeadm.go:322] [preflight] Running pre-flight checks
	I0229 02:17:37.340395  360079 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0229 02:17:37.340526  360079 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0229 02:17:37.340643  360079 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0229 02:17:37.578733  360079 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0229 02:17:37.580576  360079 out.go:204]   - Generating certificates and keys ...
	I0229 02:17:37.580753  360079 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0229 02:17:37.580872  360079 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0229 02:17:37.580986  360079 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0229 02:17:37.581082  360079 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0229 02:17:37.581187  360079 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0229 02:17:37.581416  360079 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0229 02:17:37.581969  360079 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0229 02:17:37.582241  360079 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0229 02:17:37.582871  360079 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0229 02:17:37.583233  360079 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0229 02:17:37.583541  360079 kubeadm.go:322] [certs] Using the existing "sa" key
	I0229 02:17:37.583596  360079 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0229 02:17:37.843311  360079 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0229 02:17:37.914504  360079 kubeadm.go:322] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0229 02:17:38.039892  360079 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0229 02:17:38.271953  360079 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0229 02:17:38.514979  360079 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0229 02:17:38.515587  360079 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0229 02:17:38.518101  360079 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0229 02:17:34.279682  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:17:36.283111  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:17:38.780078  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:17:36.807145  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:17:39.305997  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:17:37.407526  360776 cri.go:89] found id: ""
	I0229 02:17:37.407554  360776 logs.go:276] 0 containers: []
	W0229 02:17:37.407567  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:17:37.407574  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:17:37.407642  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:17:37.486848  360776 cri.go:89] found id: ""
	I0229 02:17:37.486890  360776 logs.go:276] 0 containers: []
	W0229 02:17:37.486902  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:17:37.486910  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:17:37.486978  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:17:37.529152  360776 cri.go:89] found id: ""
	I0229 02:17:37.529187  360776 logs.go:276] 0 containers: []
	W0229 02:17:37.529199  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:17:37.529221  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:17:37.529238  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:17:37.594611  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:17:37.594642  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:17:37.612946  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:17:37.612980  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:17:37.697527  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:17:37.697552  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:17:37.697568  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:17:37.737130  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:17:37.737165  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:17:40.285260  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:17:40.302884  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:17:40.302962  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:17:40.346431  360776 cri.go:89] found id: ""
	I0229 02:17:40.346463  360776 logs.go:276] 0 containers: []
	W0229 02:17:40.346474  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:17:40.346481  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:17:40.346547  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:17:40.403100  360776 cri.go:89] found id: ""
	I0229 02:17:40.403132  360776 logs.go:276] 0 containers: []
	W0229 02:17:40.403147  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:17:40.403154  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:17:40.403223  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:17:40.466390  360776 cri.go:89] found id: ""
	I0229 02:17:40.466424  360776 logs.go:276] 0 containers: []
	W0229 02:17:40.466435  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:17:40.466444  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:17:40.466516  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:17:40.509811  360776 cri.go:89] found id: ""
	I0229 02:17:40.509840  360776 logs.go:276] 0 containers: []
	W0229 02:17:40.509851  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:17:40.509859  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:17:40.509918  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:17:40.546249  360776 cri.go:89] found id: ""
	I0229 02:17:40.546281  360776 logs.go:276] 0 containers: []
	W0229 02:17:40.546294  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:17:40.546302  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:17:40.546366  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:17:40.584490  360776 cri.go:89] found id: ""
	I0229 02:17:40.584520  360776 logs.go:276] 0 containers: []
	W0229 02:17:40.584532  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:17:40.584540  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:17:40.584602  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:17:40.628397  360776 cri.go:89] found id: ""
	I0229 02:17:40.628427  360776 logs.go:276] 0 containers: []
	W0229 02:17:40.628439  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:17:40.628447  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:17:40.628508  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:17:40.675557  360776 cri.go:89] found id: ""
	I0229 02:17:40.675584  360776 logs.go:276] 0 containers: []
	W0229 02:17:40.675593  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:17:40.675603  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:17:40.675616  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:17:40.762140  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:17:40.762167  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:17:40.762192  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:17:40.808405  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:17:40.808444  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:17:40.860511  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:17:40.860553  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:17:40.929977  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:17:40.930013  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
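The cycle above repeats throughout this test: with the apiserver down, minikube probes for each control-plane container by name and gets nothing back. A minimal sketch in Go of the same probe, assuming crictl and passwordless sudo are available on the node (the command and container names match the log):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // listContainerIDs runs the same probe as the log: list the IDs of all
    // containers (running or exited) whose name matches `name`. An empty
    // result is what the log reports as `found id: ""` / `0 containers`.
    func listContainerIDs(name string) ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
        if err != nil {
            return nil, err
        }
        var ids []string
        for _, line := range strings.Split(string(out), "\n") {
            if line = strings.TrimSpace(line); line != "" {
                ids = append(ids, line)
            }
        }
        return ids, nil
    }

    func main() {
        for _, name := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
            ids, err := listContainerIDs(name)
            if err != nil {
                fmt.Printf("%s: error: %v\n", name, err)
                continue
            }
            fmt.Printf("%s: %d containers: %v\n", name, len(ids), ids)
        }
    }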
	I0229 02:17:38.519654  360079 out.go:204]   - Booting up control plane ...
	I0229 02:17:38.519770  360079 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0229 02:17:38.520351  360079 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0229 02:17:38.523272  360079 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0229 02:17:38.545603  360079 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0229 02:17:38.547015  360079 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0229 02:17:38.547133  360079 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0229 02:17:38.713788  360079 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0229 02:17:40.780376  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:17:43.278958  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:17:41.308561  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:17:43.308710  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:17:44.718240  360079 kubeadm.go:322] [apiclient] All control plane components are healthy after 6.003956 seconds
	I0229 02:17:44.736859  360079 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0229 02:17:44.755878  360079 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0229 02:17:45.285373  360079 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0229 02:17:45.285648  360079 kubeadm.go:322] [mark-control-plane] Marking the node no-preload-907398 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0229 02:17:45.797261  360079 kubeadm.go:322] [bootstrap-token] Using token: 32tkap.hl2tmrs81t324g78
	I0229 02:17:45.798858  360079 out.go:204]   - Configuring RBAC rules ...
	I0229 02:17:45.798996  360079 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0229 02:17:45.805734  360079 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0229 02:17:45.814737  360079 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0229 02:17:45.818516  360079 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0229 02:17:45.823668  360079 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0229 02:17:45.827430  360079 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0229 02:17:45.842656  360079 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0229 02:17:46.096543  360079 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0229 02:17:46.292966  360079 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0229 02:17:46.293952  360079 kubeadm.go:322] 
	I0229 02:17:46.294055  360079 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0229 02:17:46.294075  360079 kubeadm.go:322] 
	I0229 02:17:46.294188  360079 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0229 02:17:46.294199  360079 kubeadm.go:322] 
	I0229 02:17:46.294231  360079 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0229 02:17:46.294314  360079 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0229 02:17:46.294432  360079 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0229 02:17:46.294454  360079 kubeadm.go:322] 
	I0229 02:17:46.294528  360079 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0229 02:17:46.294547  360079 kubeadm.go:322] 
	I0229 02:17:46.294635  360079 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0229 02:17:46.294657  360079 kubeadm.go:322] 
	I0229 02:17:46.294720  360079 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0229 02:17:46.294864  360079 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0229 02:17:46.294948  360079 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0229 02:17:46.294959  360079 kubeadm.go:322] 
	I0229 02:17:46.295078  360079 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0229 02:17:46.295174  360079 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0229 02:17:46.295185  360079 kubeadm.go:322] 
	I0229 02:17:46.295297  360079 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 32tkap.hl2tmrs81t324g78 \
	I0229 02:17:46.295404  360079 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:99ebea2fb56f25d35c82503561681d8a6f4727046bb097effad0d0203de43e75 \
	I0229 02:17:46.295441  360079 kubeadm.go:322] 	--control-plane 
	I0229 02:17:46.295448  360079 kubeadm.go:322] 
	I0229 02:17:46.295583  360079 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0229 02:17:46.295605  360079 kubeadm.go:322] 
	I0229 02:17:46.295770  360079 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 32tkap.hl2tmrs81t324g78 \
	I0229 02:17:46.295933  360079 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:99ebea2fb56f25d35c82503561681d8a6f4727046bb097effad0d0203de43e75 
	I0229 02:17:46.298233  360079 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
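The --discovery-token-ca-cert-hash printed in the join commands above is the SHA-256 of the cluster CA certificate's SubjectPublicKeyInfo (DER). A minimal Go sketch that recomputes it on the control-plane node; the ca.crt path is the standard kubeadm location and is an assumption here:

    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
    )

    // Recompute kubeadm's discovery hash: "sha256:" + hex(SHA-256(SPKI)).
    func main() {
        pemBytes, err := os.ReadFile("/etc/kubernetes/pki/ca.crt") // standard kubeadm path (assumed)
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(pemBytes)
        if block == nil {
            panic("no PEM block found in ca.crt")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        // Marshal the public key back to SubjectPublicKeyInfo DER and hash it.
        spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
        if err != nil {
            panic(err)
        }
        fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
    }

The printed value should match the hash in the kubeadm join lines above for the same cluster CA.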
	I0229 02:17:46.298273  360079 cni.go:84] Creating CNI manager for ""
	I0229 02:17:46.298290  360079 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0229 02:17:46.300109  360079 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0229 02:17:43.449607  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:17:43.466367  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:17:43.466441  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:17:43.504826  360776 cri.go:89] found id: ""
	I0229 02:17:43.504861  360776 logs.go:276] 0 containers: []
	W0229 02:17:43.504873  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:17:43.504880  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:17:43.504946  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:17:43.548641  360776 cri.go:89] found id: ""
	I0229 02:17:43.548682  360776 logs.go:276] 0 containers: []
	W0229 02:17:43.548693  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:17:43.548701  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:17:43.548760  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:17:43.591044  360776 cri.go:89] found id: ""
	I0229 02:17:43.591075  360776 logs.go:276] 0 containers: []
	W0229 02:17:43.591085  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:17:43.591092  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:17:43.591152  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:17:43.639237  360776 cri.go:89] found id: ""
	I0229 02:17:43.639261  360776 logs.go:276] 0 containers: []
	W0229 02:17:43.639269  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:17:43.639275  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:17:43.639329  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:17:43.677231  360776 cri.go:89] found id: ""
	I0229 02:17:43.677264  360776 logs.go:276] 0 containers: []
	W0229 02:17:43.677277  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:17:43.677285  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:17:43.677359  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:17:43.721264  360776 cri.go:89] found id: ""
	I0229 02:17:43.721295  360776 logs.go:276] 0 containers: []
	W0229 02:17:43.721306  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:17:43.721314  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:17:43.721379  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:17:43.757248  360776 cri.go:89] found id: ""
	I0229 02:17:43.757281  360776 logs.go:276] 0 containers: []
	W0229 02:17:43.757293  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:17:43.757300  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:17:43.757365  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:17:43.802304  360776 cri.go:89] found id: ""
	I0229 02:17:43.802332  360776 logs.go:276] 0 containers: []
	W0229 02:17:43.802343  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:17:43.802359  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:17:43.802375  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:17:43.855921  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:17:43.855949  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:17:43.869586  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:17:43.869623  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:17:43.945526  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:17:43.945562  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:17:43.945579  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:17:43.987179  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:17:43.987215  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:17:46.537504  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:17:46.556578  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:17:46.556653  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:17:46.603983  360776 cri.go:89] found id: ""
	I0229 02:17:46.604012  360776 logs.go:276] 0 containers: []
	W0229 02:17:46.604025  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:17:46.604037  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:17:46.604107  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:17:46.657708  360776 cri.go:89] found id: ""
	I0229 02:17:46.657736  360776 logs.go:276] 0 containers: []
	W0229 02:17:46.657747  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:17:46.657754  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:17:46.657820  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:17:46.708795  360776 cri.go:89] found id: ""
	I0229 02:17:46.708830  360776 logs.go:276] 0 containers: []
	W0229 02:17:46.708843  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:17:46.708852  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:17:46.708920  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:17:46.758013  360776 cri.go:89] found id: ""
	I0229 02:17:46.758043  360776 logs.go:276] 0 containers: []
	W0229 02:17:46.758056  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:17:46.758064  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:17:46.758157  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:17:46.813107  360776 cri.go:89] found id: ""
	I0229 02:17:46.813138  360776 logs.go:276] 0 containers: []
	W0229 02:17:46.813149  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:17:46.813156  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:17:46.813219  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:17:46.859040  360776 cri.go:89] found id: ""
	I0229 02:17:46.859070  360776 logs.go:276] 0 containers: []
	W0229 02:17:46.859081  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:17:46.859089  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:17:46.859154  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:17:46.905302  360776 cri.go:89] found id: ""
	I0229 02:17:46.905334  360776 logs.go:276] 0 containers: []
	W0229 02:17:46.905346  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:17:46.905354  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:17:46.905416  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:17:46.950465  360776 cri.go:89] found id: ""
	I0229 02:17:46.950491  360776 logs.go:276] 0 containers: []
	W0229 02:17:46.950502  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:17:46.950515  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:17:46.950530  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:17:47.035016  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:17:47.035044  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:17:47.035062  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:17:47.074108  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:17:47.074140  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:17:47.122149  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:17:47.122183  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:17:47.187233  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:17:47.187283  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:17:46.301876  360079 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0229 02:17:46.328857  360079 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
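The 457-byte file pushed to /etc/cni/net.d/1-k8s.conflist above is the bridge CNI config selected earlier ("kvm2" driver + "containerd" runtime, recommending bridge). A sketch of what such a conflist can look like, written from Go; the plugin fields and subnet below are illustrative assumptions, not the bytes from this run:

    package main

    import "os"

    // An illustrative bridge CNI conflist of the kind minikube writes to
    // /etc/cni/net.d/1-k8s.conflist. Field values here are assumptions.
    const conflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    `

    func main() {
        // 0644 so the kubelet/containerd can read the network config.
        if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
            panic(err)
        }
    }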
	I0229 02:17:46.365095  360079 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0229 02:17:46.365210  360079 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:17:46.365239  360079 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=f83faa3abac33ca85ff15afa19006ad0a2554d61 minikube.k8s.io/name=no-preload-907398 minikube.k8s.io/updated_at=2024_02_29T02_17_46_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:17:46.445475  360079 ops.go:34] apiserver oom_adj: -16
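The oom_adj: -16 read above comes from the legacy /proc/<pid>/oom_adj knob (range -17..15). The kernel derives it from the modern oom_score_adj (-1000..1000), so the -997 the kubelet assigns to Guaranteed-QoS control-plane pods surfaces as roughly -16 through the legacy interface. A small Go sketch reading both knobs for a PID (the pgrep usage mirrors the command in the log):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // Print both OOM knobs for a PID passed as the first argument,
    // e.g. the output of `pgrep kube-apiserver`.
    func main() {
        pid := os.Args[1]
        for _, f := range []string{"oom_adj", "oom_score_adj"} {
            b, err := os.ReadFile("/proc/" + pid + "/" + f)
            if err != nil {
                panic(err)
            }
            fmt.Printf("%s = %s\n", f, strings.TrimSpace(string(b)))
        }
    }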
	I0229 02:17:46.712653  360079 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:17:47.213595  360079 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:17:47.713471  360079 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:17:45.279713  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:17:47.778580  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:17:45.309019  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:17:47.808652  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:17:49.708451  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:17:49.727327  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:17:49.727383  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:17:49.775679  360776 cri.go:89] found id: ""
	I0229 02:17:49.775712  360776 logs.go:276] 0 containers: []
	W0229 02:17:49.775723  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:17:49.775732  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:17:49.775795  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:17:49.821348  360776 cri.go:89] found id: ""
	I0229 02:17:49.821378  360776 logs.go:276] 0 containers: []
	W0229 02:17:49.821387  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:17:49.821393  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:17:49.821459  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:17:49.864148  360776 cri.go:89] found id: ""
	I0229 02:17:49.864173  360776 logs.go:276] 0 containers: []
	W0229 02:17:49.864182  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:17:49.864188  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:17:49.864281  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:17:49.904720  360776 cri.go:89] found id: ""
	I0229 02:17:49.904747  360776 logs.go:276] 0 containers: []
	W0229 02:17:49.904756  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:17:49.904768  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:17:49.904835  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:17:49.941952  360776 cri.go:89] found id: ""
	I0229 02:17:49.941976  360776 logs.go:276] 0 containers: []
	W0229 02:17:49.941985  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:17:49.941992  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:17:49.942050  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:17:49.987518  360776 cri.go:89] found id: ""
	I0229 02:17:49.987549  360776 logs.go:276] 0 containers: []
	W0229 02:17:49.987559  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:17:49.987566  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:17:49.987642  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:17:50.030662  360776 cri.go:89] found id: ""
	I0229 02:17:50.030691  360776 logs.go:276] 0 containers: []
	W0229 02:17:50.030700  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:17:50.030708  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:17:50.030768  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:17:50.075564  360776 cri.go:89] found id: ""
	I0229 02:17:50.075594  360776 logs.go:276] 0 containers: []
	W0229 02:17:50.075605  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:17:50.075617  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:17:50.075634  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:17:50.144223  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:17:50.144261  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:17:50.190615  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:17:50.190649  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:17:50.209014  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:17:50.209041  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:17:50.291096  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:17:50.291121  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:17:50.291135  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:17:48.213151  360079 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:17:48.713484  360079 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:17:49.212735  360079 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:17:49.713172  360079 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:17:50.213286  360079 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:17:50.712875  360079 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:17:51.213491  360079 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:17:51.713354  360079 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:17:52.212811  360079 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:17:52.712670  360079 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:17:49.779580  360217 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace has status "Ready":"False"
	I0229 02:17:51.771065  360217 pod_ready.go:81] duration metric: took 4m0.00037351s waiting for pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace to be "Ready" ...
	E0229 02:17:51.771121  360217 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-hxzvc" in "kube-system" namespace to be "Ready" (will not retry!)
	I0229 02:17:51.771147  360217 pod_ready.go:38] duration metric: took 4m14.54716064s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 02:17:51.771185  360217 kubeadm.go:640] restartCluster took 4m31.62028036s
	W0229 02:17:51.771272  360217 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0229 02:17:51.771309  360217 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
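The 4m0s wait that just timed out for metrics-server is a poll on the pod's Ready condition. A minimal sketch of the same wait with client-go, assuming the in-VM kubeconfig path seen elsewhere in this log; the namespace and pod name are taken from the messages above:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitPodReady polls until the pod's Ready condition is True or the
    // timeout elapses, mirroring the 4m0s wait that fails in the log.
    func waitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
        return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
            pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
            if err != nil {
                return false, nil // treat errors as transient and keep polling
            }
            for _, c := range pod.Status.Conditions {
                if c.Type == corev1.PodReady {
                    return c.Status == corev1.ConditionTrue, nil
                }
            }
            return false, nil
        })
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        err = waitPodReady(cs, "kube-system", "metrics-server-57f55c9bc5-hxzvc", 4*time.Minute)
        fmt.Println("ready err:", err)
    }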
	I0229 02:17:50.307305  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:17:52.309458  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:17:54.310095  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:17:52.827936  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:17:52.844926  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:17:52.845027  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:17:52.892302  360776 cri.go:89] found id: ""
	I0229 02:17:52.892336  360776 logs.go:276] 0 containers: []
	W0229 02:17:52.892349  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:17:52.892357  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:17:52.892417  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:17:52.943564  360776 cri.go:89] found id: ""
	I0229 02:17:52.943597  360776 logs.go:276] 0 containers: []
	W0229 02:17:52.943607  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:17:52.943615  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:17:52.943683  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:17:52.990217  360776 cri.go:89] found id: ""
	I0229 02:17:52.990251  360776 logs.go:276] 0 containers: []
	W0229 02:17:52.990269  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:17:52.990278  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:17:52.990347  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:17:53.038508  360776 cri.go:89] found id: ""
	I0229 02:17:53.038542  360776 logs.go:276] 0 containers: []
	W0229 02:17:53.038554  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:17:53.038562  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:17:53.038622  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:17:53.082156  360776 cri.go:89] found id: ""
	I0229 02:17:53.082184  360776 logs.go:276] 0 containers: []
	W0229 02:17:53.082197  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:17:53.082205  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:17:53.082287  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:17:53.149247  360776 cri.go:89] found id: ""
	I0229 02:17:53.149284  360776 logs.go:276] 0 containers: []
	W0229 02:17:53.149295  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:17:53.149304  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:17:53.149371  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:17:53.201169  360776 cri.go:89] found id: ""
	I0229 02:17:53.201199  360776 logs.go:276] 0 containers: []
	W0229 02:17:53.201211  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:17:53.201219  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:17:53.201286  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:17:53.268458  360776 cri.go:89] found id: ""
	I0229 02:17:53.268493  360776 logs.go:276] 0 containers: []
	W0229 02:17:53.268507  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:17:53.268521  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:17:53.268546  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:17:53.288661  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:17:53.288708  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:17:53.371251  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:17:53.371277  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:17:53.371295  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:17:53.415981  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:17:53.416033  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:17:53.464558  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:17:53.464600  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:17:56.030905  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:17:56.046625  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:17:56.046709  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:17:56.090035  360776 cri.go:89] found id: ""
	I0229 02:17:56.090066  360776 logs.go:276] 0 containers: []
	W0229 02:17:56.090094  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:17:56.090103  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:17:56.090176  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:17:56.158245  360776 cri.go:89] found id: ""
	I0229 02:17:56.158276  360776 logs.go:276] 0 containers: []
	W0229 02:17:56.158289  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:17:56.158297  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:17:56.158378  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:17:56.203917  360776 cri.go:89] found id: ""
	I0229 02:17:56.203947  360776 logs.go:276] 0 containers: []
	W0229 02:17:56.203959  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:17:56.203967  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:17:56.204037  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:17:56.267950  360776 cri.go:89] found id: ""
	I0229 02:17:56.267978  360776 logs.go:276] 0 containers: []
	W0229 02:17:56.267995  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:17:56.268003  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:17:56.268065  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:17:56.312936  360776 cri.go:89] found id: ""
	I0229 02:17:56.312967  360776 logs.go:276] 0 containers: []
	W0229 02:17:56.312979  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:17:56.312987  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:17:56.313050  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:17:56.357548  360776 cri.go:89] found id: ""
	I0229 02:17:56.357584  360776 logs.go:276] 0 containers: []
	W0229 02:17:56.357596  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:17:56.357605  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:17:56.357674  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:17:56.401842  360776 cri.go:89] found id: ""
	I0229 02:17:56.401876  360776 logs.go:276] 0 containers: []
	W0229 02:17:56.401890  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:17:56.401898  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:17:56.401965  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:17:56.448506  360776 cri.go:89] found id: ""
	I0229 02:17:56.448538  360776 logs.go:276] 0 containers: []
	W0229 02:17:56.448549  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:17:56.448562  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:17:56.448578  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:17:56.498783  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:17:56.498821  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:17:56.516722  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:17:56.516768  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:17:56.601770  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:17:56.601797  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:17:56.601815  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:17:56.642969  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:17:56.643010  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:17:53.212697  360079 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:17:53.712843  360079 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:17:54.212762  360079 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:17:54.713449  360079 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:17:55.213612  360079 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:17:55.712707  360079 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:17:56.213635  360079 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:17:56.713158  360079 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:17:57.213615  360079 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:17:57.713426  360079 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:17:57.378120  360217 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (5.606758107s)
	I0229 02:17:57.378252  360217 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 02:17:57.396898  360217 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0229 02:17:57.409107  360217 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 02:17:57.420877  360217 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 02:17:57.420927  360217 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0229 02:17:57.486066  360217 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0229 02:17:57.486157  360217 kubeadm.go:322] [preflight] Running pre-flight checks
	I0229 02:17:57.660083  360217 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0229 02:17:57.660277  360217 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0229 02:17:57.660395  360217 kubeadm.go:322] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0229 02:17:57.916360  360217 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0229 02:17:58.213116  360079 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:17:58.349580  360079 kubeadm.go:1088] duration metric: took 11.984450803s to wait for elevateKubeSystemPrivileges.
	I0229 02:17:58.349651  360079 kubeadm.go:406] StartCluster complete in 4m58.053023709s
	I0229 02:17:58.349775  360079 settings.go:142] acquiring lock: {Name:mkf6d985c87ae1ba2300543c86d438bf48134dc6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:17:58.349948  360079 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18063-309085/kubeconfig
	I0229 02:17:58.351856  360079 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18063-309085/kubeconfig: {Name:mkd85f0f36cbc770f723a754929a01738907b7bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:17:58.352191  360079 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0229 02:17:58.352353  360079 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0229 02:17:58.352434  360079 addons.go:69] Setting storage-provisioner=true in profile "no-preload-907398"
	I0229 02:17:58.352462  360079 addons.go:234] Setting addon storage-provisioner=true in "no-preload-907398"
	W0229 02:17:58.352474  360079 addons.go:243] addon storage-provisioner should already be in state true
	I0229 02:17:58.352492  360079 config.go:182] Loaded profile config "no-preload-907398": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.29.0-rc.2
	I0229 02:17:58.352546  360079 addons.go:69] Setting default-storageclass=true in profile "no-preload-907398"
	I0229 02:17:58.352600  360079 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-907398"
	I0229 02:17:58.352615  360079 host.go:66] Checking if "no-preload-907398" exists ...
	I0229 02:17:58.353032  360079 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0229 02:17:58.353043  360079 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0229 02:17:58.353052  360079 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:17:58.353068  360079 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:17:58.353120  360079 addons.go:69] Setting metrics-server=true in profile "no-preload-907398"
	I0229 02:17:58.353134  360079 addons.go:234] Setting addon metrics-server=true in "no-preload-907398"
	W0229 02:17:58.353141  360079 addons.go:243] addon metrics-server should already be in state true
	I0229 02:17:58.353182  360079 host.go:66] Checking if "no-preload-907398" exists ...
	I0229 02:17:58.353351  360079 addons.go:69] Setting dashboard=true in profile "no-preload-907398"
	I0229 02:17:58.353372  360079 addons.go:234] Setting addon dashboard=true in "no-preload-907398"
	W0229 02:17:58.353379  360079 addons.go:243] addon dashboard should already be in state true
	I0229 02:17:58.353416  360079 host.go:66] Checking if "no-preload-907398" exists ...
	I0229 02:17:58.353501  360079 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0229 02:17:58.353521  360079 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:17:58.353780  360079 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0229 02:17:58.353802  360079 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:17:58.374370  360079 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32883
	I0229 02:17:58.374457  360079 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35955
	I0229 02:17:58.374503  360079 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41265
	I0229 02:17:58.374564  360079 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34767
	I0229 02:17:58.375443  360079 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:17:58.375468  360079 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:17:58.375533  360079 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:17:58.375559  360079 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:17:58.375998  360079 main.go:141] libmachine: Using API Version  1
	I0229 02:17:58.376013  360079 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:17:58.376104  360079 main.go:141] libmachine: Using API Version  1
	I0229 02:17:58.376118  360079 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:17:58.376153  360079 main.go:141] libmachine: Using API Version  1
	I0229 02:17:58.376166  360079 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:17:58.376242  360079 main.go:141] libmachine: Using API Version  1
	I0229 02:17:58.376255  360079 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:17:58.376604  360079 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:17:58.376608  360079 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:17:58.376642  360079 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:17:58.377147  360079 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0229 02:17:58.377181  360079 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:17:58.377256  360079 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0229 02:17:58.377274  360079 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:17:58.377339  360079 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:17:58.377532  360079 main.go:141] libmachine: (no-preload-907398) Calling .GetState
	I0229 02:17:58.377723  360079 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0229 02:17:58.377754  360079 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:17:58.380332  360079 addons.go:234] Setting addon default-storageclass=true in "no-preload-907398"
	W0229 02:17:58.380348  360079 addons.go:243] addon default-storageclass should already be in state true
	I0229 02:17:58.380373  360079 host.go:66] Checking if "no-preload-907398" exists ...
	I0229 02:17:58.380607  360079 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0229 02:17:58.380620  360079 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:17:58.399601  360079 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37663
	I0229 02:17:58.400286  360079 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:17:58.400514  360079 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36145
	I0229 02:17:58.401167  360079 main.go:141] libmachine: Using API Version  1
	I0229 02:17:58.401184  360079 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:17:58.401173  360079 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:17:58.401760  360079 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:17:58.402030  360079 main.go:141] libmachine: (no-preload-907398) Calling .GetState
	I0229 02:17:58.402970  360079 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36619
	W0229 02:17:58.403287  360079 kapi.go:245] failed rescaling "coredns" deployment in "kube-system" namespace and "no-preload-907398" context to 1 replicas: non-retryable failure while rescaling coredns deployment: Operation cannot be fulfilled on deployments.apps "coredns": the object has been modified; please apply your changes to the latest version and try again
	E0229 02:17:58.403312  360079 start.go:219] Unable to scale down deployment "coredns" in namespace "kube-system" to 1 replica: non-retryable failure while rescaling coredns deployment: Operation cannot be fulfilled on deployments.apps "coredns": the object has been modified; please apply your changes to the latest version and try again
	I0229 02:17:58.403338  360079 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.150 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
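The failed coredns rescale above is a routine optimistic-concurrency conflict: the Deployment changed between minikube's read and its update, so the apiserver rejected the stale resourceVersion. The standard remedy is to re-read and retry; a sketch with client-go's retry helper, assuming the same kubeconfig path used elsewhere in this log:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
        "k8s.io/client-go/util/retry"
    )

    // Scale the coredns Deployment to one replica, re-reading the object on
    // each attempt so the update carries the latest resourceVersion.
    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        deployments := cs.AppsV1().Deployments("kube-system")
        err = retry.RetryOnConflict(retry.DefaultRetry, func() error {
            d, err := deployments.Get(context.TODO(), "coredns", metav1.GetOptions{})
            if err != nil {
                return err
            }
            replicas := int32(1)
            d.Spec.Replicas = &replicas
            _, err = deployments.Update(context.TODO(), d, metav1.UpdateOptions{})
            return err // a Conflict error here triggers another attempt
        })
        fmt.Println("rescale err:", err)
    }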
	I0229 02:17:58.405226  360079 out.go:177] * Verifying Kubernetes components...
	I0229 02:17:58.403538  360079 main.go:141] libmachine: Using API Version  1
	I0229 02:17:58.403723  360079 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:17:58.404198  360079 main.go:141] libmachine: (no-preload-907398) Calling .DriverName
	I0229 02:17:58.406627  360079 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:17:58.406718  360079 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 02:17:58.412539  360079 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 02:17:58.407373  360079 main.go:141] libmachine: Using API Version  1
	I0229 02:17:58.407398  360079 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:17:58.414311  360079 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0229 02:17:58.414334  360079 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0229 02:17:58.414352  360079 main.go:141] libmachine: (no-preload-907398) Calling .GetSSHHostname
	I0229 02:17:58.412590  360079 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:17:58.412844  360079 main.go:141] libmachine: (no-preload-907398) Calling .GetState
	I0229 02:17:58.413706  360079 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44029
	I0229 02:17:58.415059  360079 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:17:58.415498  360079 main.go:141] libmachine: (no-preload-907398) Calling .GetState
	I0229 02:17:58.417082  360079 main.go:141] libmachine: (no-preload-907398) Calling .DriverName
	I0229 02:17:58.417438  360079 main.go:141] libmachine: (no-preload-907398) Calling .DriverName
	I0229 02:17:58.418583  360079 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0229 02:17:58.419735  360079 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0229 02:17:58.420843  360079 addons.go:426] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0229 02:17:58.420858  360079 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0229 02:17:58.420876  360079 main.go:141] libmachine: (no-preload-907398) Calling .GetSSHHostname
	I0229 02:17:58.418780  360079 main.go:141] libmachine: (no-preload-907398) DBG | domain no-preload-907398 has defined MAC address 52:54:00:c8:70:9e in network mk-no-preload-907398
	I0229 02:17:58.420946  360079 main.go:141] libmachine: (no-preload-907398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:70:9e", ip: ""} in network mk-no-preload-907398: {Iface:virbr1 ExpiryTime:2024-02-29 03:12:50 +0000 UTC Type:0 Mac:52:54:00:c8:70:9e Iaid: IPaddr:192.168.61.150 Prefix:24 Hostname:no-preload-907398 Clientid:01:52:54:00:c8:70:9e}
	I0229 02:17:58.420968  360079 main.go:141] libmachine: (no-preload-907398) DBG | domain no-preload-907398 has defined IP address 192.168.61.150 and MAC address 52:54:00:c8:70:9e in network mk-no-preload-907398
	I0229 02:17:58.418281  360079 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:17:58.422030  360079 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
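The image reference fake.domain/registry.k8s.io/echoserver:1.4 above appears intentional rather than a typo: the test deliberately points the metrics-server addon at an unresolvable registry so its image can never be pulled, which is consistent with the metrics-server pods elsewhere in this log (e.g. metrics-server-57f55c9bc5-9sdkl) never leaving "Ready":"False".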
	I0229 02:17:57.917746  360217 out.go:204]   - Generating certificates and keys ...
	I0229 02:17:57.917859  360217 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0229 02:17:57.917965  360217 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0229 02:17:57.918411  360217 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0229 02:17:57.918918  360217 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0229 02:17:57.919445  360217 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0229 02:17:57.919873  360217 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0229 02:17:57.920396  360217 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0229 02:17:57.920807  360217 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0229 02:17:57.921322  360217 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0229 02:17:57.921710  360217 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0229 02:17:57.922094  360217 kubeadm.go:322] [certs] Using the existing "sa" key
	I0229 02:17:57.922176  360217 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0229 02:17:58.103086  360217 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0229 02:17:58.146435  360217 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0229 02:17:58.422571  360217 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0229 02:17:58.544422  360217 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0229 02:17:58.545127  360217 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0229 02:17:58.547666  360217 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0229 02:17:58.549247  360217 out.go:204]   - Booting up control plane ...
	I0229 02:17:58.549352  360217 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0229 02:17:58.549459  360217 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0229 02:17:58.550242  360217 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0229 02:17:58.577890  360217 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0229 02:17:58.579022  360217 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0229 02:17:58.579096  360217 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0229 02:17:58.733877  360217 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0229 02:17:56.311800  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:17:58.809250  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:17:58.419456  360079 main.go:141] libmachine: (no-preload-907398) Calling .GetSSHPort
	I0229 02:17:58.421615  360079 main.go:141] libmachine: Using API Version  1
	I0229 02:17:58.423246  360079 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:17:58.423335  360079 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0229 02:17:58.423343  360079 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0229 02:17:58.423357  360079 main.go:141] libmachine: (no-preload-907398) Calling .GetSSHHostname
	I0229 02:17:58.424461  360079 main.go:141] libmachine: (no-preload-907398) Calling .GetSSHKeyPath
	I0229 02:17:58.424633  360079 main.go:141] libmachine: (no-preload-907398) Calling .GetSSHUsername
	I0229 02:17:58.424741  360079 sshutil.go:53] new ssh client: &{IP:192.168.61.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-309085/.minikube/machines/no-preload-907398/id_rsa Username:docker}
	I0229 02:17:58.424781  360079 main.go:141] libmachine: (no-preload-907398) DBG | domain no-preload-907398 has defined MAC address 52:54:00:c8:70:9e in network mk-no-preload-907398
	I0229 02:17:58.425249  360079 main.go:141] libmachine: (no-preload-907398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:70:9e", ip: ""} in network mk-no-preload-907398: {Iface:virbr1 ExpiryTime:2024-02-29 03:12:50 +0000 UTC Type:0 Mac:52:54:00:c8:70:9e Iaid: IPaddr:192.168.61.150 Prefix:24 Hostname:no-preload-907398 Clientid:01:52:54:00:c8:70:9e}
	I0229 02:17:58.425315  360079 main.go:141] libmachine: (no-preload-907398) DBG | domain no-preload-907398 has defined IP address 192.168.61.150 and MAC address 52:54:00:c8:70:9e in network mk-no-preload-907398
	I0229 02:17:58.425145  360079 main.go:141] libmachine: (no-preload-907398) Calling .GetSSHPort
	I0229 02:17:58.425622  360079 main.go:141] libmachine: (no-preload-907398) Calling .GetSSHKeyPath
	I0229 02:17:58.425732  360079 main.go:141] libmachine: (no-preload-907398) Calling .GetSSHUsername
	I0229 02:17:58.425865  360079 sshutil.go:53] new ssh client: &{IP:192.168.61.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-309085/.minikube/machines/no-preload-907398/id_rsa Username:docker}
	I0229 02:17:58.426305  360079 main.go:141] libmachine: (no-preload-907398) DBG | domain no-preload-907398 has defined MAC address 52:54:00:c8:70:9e in network mk-no-preload-907398
	I0229 02:17:58.430169  360079 main.go:141] libmachine: (no-preload-907398) Calling .GetSSHPort
	I0229 02:17:58.430190  360079 main.go:141] libmachine: (no-preload-907398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:70:9e", ip: ""} in network mk-no-preload-907398: {Iface:virbr1 ExpiryTime:2024-02-29 03:12:50 +0000 UTC Type:0 Mac:52:54:00:c8:70:9e Iaid: IPaddr:192.168.61.150 Prefix:24 Hostname:no-preload-907398 Clientid:01:52:54:00:c8:70:9e}
	I0229 02:17:58.430213  360079 main.go:141] libmachine: (no-preload-907398) DBG | domain no-preload-907398 has defined IP address 192.168.61.150 and MAC address 52:54:00:c8:70:9e in network mk-no-preload-907398
	I0229 02:17:58.430221  360079 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:17:58.430491  360079 main.go:141] libmachine: (no-preload-907398) Calling .GetSSHKeyPath
	I0229 02:17:58.430917  360079 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0229 02:17:58.430946  360079 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:17:58.431346  360079 main.go:141] libmachine: (no-preload-907398) Calling .GetSSHUsername
	I0229 02:17:58.431541  360079 sshutil.go:53] new ssh client: &{IP:192.168.61.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-309085/.minikube/machines/no-preload-907398/id_rsa Username:docker}
	I0229 02:17:58.448561  360079 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39103
	I0229 02:17:58.449216  360079 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:17:58.449840  360079 main.go:141] libmachine: Using API Version  1
	I0229 02:17:58.449868  360079 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:17:58.450301  360079 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:17:58.450574  360079 main.go:141] libmachine: (no-preload-907398) Calling .GetState
	I0229 02:17:58.452414  360079 main.go:141] libmachine: (no-preload-907398) Calling .DriverName
	I0229 02:17:58.452680  360079 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0229 02:17:58.452696  360079 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0229 02:17:58.452714  360079 main.go:141] libmachine: (no-preload-907398) Calling .GetSSHHostname
	I0229 02:17:58.455680  360079 main.go:141] libmachine: (no-preload-907398) DBG | domain no-preload-907398 has defined MAC address 52:54:00:c8:70:9e in network mk-no-preload-907398
	I0229 02:17:58.456155  360079 main.go:141] libmachine: (no-preload-907398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:70:9e", ip: ""} in network mk-no-preload-907398: {Iface:virbr1 ExpiryTime:2024-02-29 03:12:50 +0000 UTC Type:0 Mac:52:54:00:c8:70:9e Iaid: IPaddr:192.168.61.150 Prefix:24 Hostname:no-preload-907398 Clientid:01:52:54:00:c8:70:9e}
	I0229 02:17:58.456179  360079 main.go:141] libmachine: (no-preload-907398) DBG | domain no-preload-907398 has defined IP address 192.168.61.150 and MAC address 52:54:00:c8:70:9e in network mk-no-preload-907398
	I0229 02:17:58.456414  360079 main.go:141] libmachine: (no-preload-907398) Calling .GetSSHPort
	I0229 02:17:58.456600  360079 main.go:141] libmachine: (no-preload-907398) Calling .GetSSHKeyPath
	I0229 02:17:58.456726  360079 main.go:141] libmachine: (no-preload-907398) Calling .GetSSHUsername
	I0229 02:17:58.457041  360079 sshutil.go:53] new ssh client: &{IP:192.168.61.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-309085/.minikube/machines/no-preload-907398/id_rsa Username:docker}
	I0229 02:17:58.560024  360079 node_ready.go:35] waiting up to 6m0s for node "no-preload-907398" to be "Ready" ...
	I0229 02:17:58.560149  360079 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
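The pipeline in the Run: line above edits CoreDNS's Corefile inside its ConfigMap: ahead of the "forward . /etc/resolv.conf" line it inserts a hosts block mapping host.minikube.internal to the host-side gateway 192.168.61.1, ahead of "errors" it inserts "log", and then writes the result back with kubectl replace. The injected stanza, as spelled out in the sed expression itself, is:

    hosts {
       192.168.61.1 host.minikube.internal
       fallthrough
    }

The "host record injected into CoreDNS's ConfigMap" line further below confirms the replace succeeded.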
	I0229 02:17:58.562721  360079 node_ready.go:49] node "no-preload-907398" has status "Ready":"True"
	I0229 02:17:58.562749  360079 node_ready.go:38] duration metric: took 2.693389ms waiting for node "no-preload-907398" to be "Ready" ...
	I0229 02:17:58.562767  360079 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 02:17:58.568960  360079 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-907398" in "kube-system" namespace to be "Ready" ...
	I0229 02:17:58.583361  360079 pod_ready.go:92] pod "etcd-no-preload-907398" in "kube-system" namespace has status "Ready":"True"
	I0229 02:17:58.583392  360079 pod_ready.go:81] duration metric: took 14.411119ms waiting for pod "etcd-no-preload-907398" in "kube-system" namespace to be "Ready" ...
	I0229 02:17:58.583408  360079 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-907398" in "kube-system" namespace to be "Ready" ...
	I0229 02:17:58.612395  360079 pod_ready.go:92] pod "kube-apiserver-no-preload-907398" in "kube-system" namespace has status "Ready":"True"
	I0229 02:17:58.612430  360079 pod_ready.go:81] duration metric: took 29.012395ms waiting for pod "kube-apiserver-no-preload-907398" in "kube-system" namespace to be "Ready" ...
	I0229 02:17:58.612444  360079 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-907398" in "kube-system" namespace to be "Ready" ...
	I0229 02:17:58.624710  360079 pod_ready.go:92] pod "kube-controller-manager-no-preload-907398" in "kube-system" namespace has status "Ready":"True"
	I0229 02:17:58.624742  360079 pod_ready.go:81] duration metric: took 12.287509ms waiting for pod "kube-controller-manager-no-preload-907398" in "kube-system" namespace to be "Ready" ...
	I0229 02:17:58.624755  360079 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-907398" in "kube-system" namespace to be "Ready" ...
	I0229 02:17:58.635770  360079 pod_ready.go:92] pod "kube-scheduler-no-preload-907398" in "kube-system" namespace has status "Ready":"True"
	I0229 02:17:58.635801  360079 pod_ready.go:81] duration metric: took 11.037539ms waiting for pod "kube-scheduler-no-preload-907398" in "kube-system" namespace to be "Ready" ...
	I0229 02:17:58.635813  360079 pod_ready.go:38] duration metric: took 73.031722ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
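Each pod_ready check above boils down to reading the Ready condition from the pod's status. A minimal predicate in client-go terms (a sketch of the idea, not minikube's pod_ready.go):

    package main

    import corev1 "k8s.io/api/core/v1"

    // podIsReady reports whether the pod carries condition Ready=True,
    // the same test the log lines above apply to each control-plane pod.
    func podIsReady(pod *corev1.Pod) bool {
        for _, cond := range pod.Status.Conditions {
            if cond.Type == corev1.PodReady {
                return cond.Status == corev1.ConditionTrue
            }
        }
        return false
    }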
	I0229 02:17:58.635837  360079 api_server.go:52] waiting for apiserver process to appear ...
	I0229 02:17:58.635901  360079 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:17:58.706760  360079 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0229 02:17:58.712477  360079 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0229 02:17:58.747607  360079 addons.go:426] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0229 02:17:58.747647  360079 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0229 02:17:58.782941  360079 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0229 02:17:58.782966  360079 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0229 02:17:58.861056  360079 addons.go:426] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0229 02:17:58.861086  360079 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0229 02:17:58.914123  360079 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0229 02:17:58.914153  360079 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0229 02:17:58.977830  360079 addons.go:426] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0229 02:17:58.977864  360079 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0229 02:17:59.075704  360079 addons.go:426] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0229 02:17:59.075734  360079 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0229 02:17:59.087287  360079 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0229 02:17:59.087318  360079 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0229 02:17:59.208828  360079 addons.go:426] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0229 02:17:59.208860  360079 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0229 02:17:59.244139  360079 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0229 02:17:59.335848  360079 start.go:929] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
	I0229 02:17:59.335882  360079 addons.go:426] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0229 02:17:59.335906  360079 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0229 02:17:59.335928  360079 api_server.go:72] duration metric: took 932.545738ms to wait for apiserver process to appear ...
	I0229 02:17:59.335948  360079 api_server.go:88] waiting for apiserver healthz status ...
	I0229 02:17:59.335972  360079 api_server.go:253] Checking apiserver healthz at https://192.168.61.150:8443/healthz ...
	I0229 02:17:59.385781  360079 addons.go:426] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0229 02:17:59.385818  360079 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0229 02:17:59.446518  360079 api_server.go:279] https://192.168.61.150:8443/healthz returned 200:
	ok
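The healthz check above is a plain HTTPS GET that counts as successful once the apiserver answers 200 with body "ok". A self-contained sketch of such a probe (the URL is taken from the log; skipping TLS verification here is for brevity only, a real client should trust the cluster CA):

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 2 * time.Second,
            // Brevity only: a real probe should verify against the cluster CA.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        for start := time.Now(); time.Since(start) < time.Minute; time.Sleep(500 * time.Millisecond) {
            resp, err := client.Get("https://192.168.61.150:8443/healthz")
            if err != nil {
                continue // apiserver not accepting connections yet
            }
            body, _ := io.ReadAll(resp.Body)
            resp.Body.Close()
            if resp.StatusCode == http.StatusOK {
                fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
                return
            }
        }
        fmt.Println("apiserver did not become healthy in time")
    }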
	I0229 02:17:59.448251  360079 addons.go:426] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0229 02:17:59.448278  360079 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0229 02:17:59.480111  360079 api_server.go:141] control plane version: v1.29.0-rc.2
	I0229 02:17:59.480149  360079 api_server.go:131] duration metric: took 144.191444ms to wait for apiserver health ...
	I0229 02:17:59.480161  360079 system_pods.go:43] waiting for kube-system pods to appear ...
	I0229 02:17:59.524432  360079 system_pods.go:59] 7 kube-system pods found
	I0229 02:17:59.524474  360079 system_pods.go:61] "coredns-76f75df574-lh6fx" [092a35bd-da64-465c-aac2-2db805ba942f] Pending
	I0229 02:17:59.524481  360079 system_pods.go:61] "coredns-76f75df574-s7wgl" [edf9b2e4-e3ae-44b4-9561-722b74c19032] Pending
	I0229 02:17:59.524486  360079 system_pods.go:61] "etcd-no-preload-907398" [90447e7e-0f30-4a5c-87d2-2ea1afc92dbe] Running
	I0229 02:17:59.524492  360079 system_pods.go:61] "kube-apiserver-no-preload-907398" [14240b57-03a5-4746-a246-28a338d7dbc1] Running
	I0229 02:17:59.524499  360079 system_pods.go:61] "kube-controller-manager-no-preload-907398" [45c6c2c5-ac84-4fc2-8981-e2f2ae6d3212] Running
	I0229 02:17:59.524508  360079 system_pods.go:61] "kube-proxy-r95w4" [32ff71b3-9287-4afa-9743-0e5e9068fa6d] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0229 02:17:59.524514  360079 system_pods.go:61] "kube-scheduler-no-preload-907398" [db7fbfcd-11c3-4ffb-a4c0-5e57ec7e069f] Running
	I0229 02:17:59.524526  360079 system_pods.go:74] duration metric: took 44.35791ms to wait for pod list to return data ...
	I0229 02:17:59.524539  360079 default_sa.go:34] waiting for default service account to be created ...
	I0229 02:17:59.556701  360079 addons.go:426] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0229 02:17:59.556744  360079 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0229 02:17:59.586815  360079 default_sa.go:45] found service account: "default"
	I0229 02:17:59.586867  360079 default_sa.go:55] duration metric: took 62.31539ms for default service account to be created ...
	I0229 02:17:59.586883  360079 system_pods.go:116] waiting for k8s-apps to be running ...
	I0229 02:17:59.613376  360079 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0229 02:17:59.661179  360079 system_pods.go:86] 7 kube-system pods found
	I0229 02:17:59.661281  360079 system_pods.go:89] "coredns-76f75df574-lh6fx" [092a35bd-da64-465c-aac2-2db805ba942f] Pending
	I0229 02:17:59.661305  360079 system_pods.go:89] "coredns-76f75df574-s7wgl" [edf9b2e4-e3ae-44b4-9561-722b74c19032] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0229 02:17:59.661322  360079 system_pods.go:89] "etcd-no-preload-907398" [90447e7e-0f30-4a5c-87d2-2ea1afc92dbe] Running
	I0229 02:17:59.661342  360079 system_pods.go:89] "kube-apiserver-no-preload-907398" [14240b57-03a5-4746-a246-28a338d7dbc1] Running
	I0229 02:17:59.661358  360079 system_pods.go:89] "kube-controller-manager-no-preload-907398" [45c6c2c5-ac84-4fc2-8981-e2f2ae6d3212] Running
	I0229 02:17:59.661376  360079 system_pods.go:89] "kube-proxy-r95w4" [32ff71b3-9287-4afa-9743-0e5e9068fa6d] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0229 02:17:59.661392  360079 system_pods.go:89] "kube-scheduler-no-preload-907398" [db7fbfcd-11c3-4ffb-a4c0-5e57ec7e069f] Running
	I0229 02:17:59.661424  360079 retry.go:31] will retry after 225.195811ms: missing components: kube-dns, kube-proxy
	I0229 02:17:59.900439  360079 system_pods.go:86] 7 kube-system pods found
	I0229 02:17:59.900490  360079 system_pods.go:89] "coredns-76f75df574-lh6fx" [092a35bd-da64-465c-aac2-2db805ba942f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0229 02:17:59.900539  360079 system_pods.go:89] "coredns-76f75df574-s7wgl" [edf9b2e4-e3ae-44b4-9561-722b74c19032] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0229 02:17:59.900555  360079 system_pods.go:89] "etcd-no-preload-907398" [90447e7e-0f30-4a5c-87d2-2ea1afc92dbe] Running
	I0229 02:17:59.900563  360079 system_pods.go:89] "kube-apiserver-no-preload-907398" [14240b57-03a5-4746-a246-28a338d7dbc1] Running
	I0229 02:17:59.900576  360079 system_pods.go:89] "kube-controller-manager-no-preload-907398" [45c6c2c5-ac84-4fc2-8981-e2f2ae6d3212] Running
	I0229 02:17:59.900587  360079 system_pods.go:89] "kube-proxy-r95w4" [32ff71b3-9287-4afa-9743-0e5e9068fa6d] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0229 02:17:59.900597  360079 system_pods.go:89] "kube-scheduler-no-preload-907398" [db7fbfcd-11c3-4ffb-a4c0-5e57ec7e069f] Running
	I0229 02:17:59.900620  360079 retry.go:31] will retry after 348.416029ms: missing components: kube-dns, kube-proxy
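The retry.go lines here poll the kube-system pod list with short, slightly randomized waits (225ms, 348ms, 374ms, 563ms, ...) until kube-dns and kube-proxy stop being reported missing. A sketch of that loop shape, assuming nothing about minikube's retry.go beyond what the log shows (the growth factor and jitter below are illustrative choices, not the tool's actual values):

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // retryUntil re-runs check with a jittered, growing wait until it
    // succeeds or the deadline passes, in the spirit of the
    // "will retry after ..." lines above.
    func retryUntil(timeout time.Duration, check func() error) error {
        deadline := time.Now().Add(timeout)
        wait := 200 * time.Millisecond
        for time.Now().Before(deadline) {
            if err := check(); err == nil {
                return nil
            }
            // Jitter keeps concurrent waiters from polling in lockstep.
            sleep := wait + time.Duration(rand.Int63n(int64(wait)/2))
            fmt.Printf("will retry after %s\n", sleep)
            time.Sleep(sleep)
            wait = wait * 3 / 2 // grow the base wait each round
        }
        return errors.New("timed out waiting for components")
    }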
	I0229 02:18:00.221814  360079 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.509290892s)
	I0229 02:18:00.221894  360079 main.go:141] libmachine: Making call to close driver server
	I0229 02:18:00.221910  360079 main.go:141] libmachine: (no-preload-907398) Calling .Close
	I0229 02:18:00.221939  360079 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.515133599s)
	I0229 02:18:00.221984  360079 main.go:141] libmachine: Making call to close driver server
	I0229 02:18:00.221998  360079 main.go:141] libmachine: (no-preload-907398) Calling .Close
	I0229 02:18:00.222483  360079 main.go:141] libmachine: (no-preload-907398) DBG | Closing plugin on server side
	I0229 02:18:00.222513  360079 main.go:141] libmachine: (no-preload-907398) DBG | Closing plugin on server side
	I0229 02:18:00.222695  360079 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:18:00.222753  360079 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:18:00.222784  360079 main.go:141] libmachine: Making call to close driver server
	I0229 02:18:00.222801  360079 main.go:141] libmachine: (no-preload-907398) Calling .Close
	I0229 02:18:00.223074  360079 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:18:00.223113  360079 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:18:00.224083  360079 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:18:00.224104  360079 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:18:00.224115  360079 main.go:141] libmachine: Making call to close driver server
	I0229 02:18:00.224123  360079 main.go:141] libmachine: (no-preload-907398) Calling .Close
	I0229 02:18:00.224355  360079 main.go:141] libmachine: (no-preload-907398) DBG | Closing plugin on server side
	I0229 02:18:00.224402  360079 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:18:00.224415  360079 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:18:00.254073  360079 main.go:141] libmachine: Making call to close driver server
	I0229 02:18:00.254130  360079 main.go:141] libmachine: (no-preload-907398) Calling .Close
	I0229 02:18:00.256526  360079 main.go:141] libmachine: (no-preload-907398) DBG | Closing plugin on server side
	I0229 02:18:00.256546  360079 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:18:00.256576  360079 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:18:00.281620  360079 system_pods.go:86] 8 kube-system pods found
	I0229 02:18:00.281652  360079 system_pods.go:89] "coredns-76f75df574-lh6fx" [092a35bd-da64-465c-aac2-2db805ba942f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0229 02:18:00.281658  360079 system_pods.go:89] "coredns-76f75df574-s7wgl" [edf9b2e4-e3ae-44b4-9561-722b74c19032] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0229 02:18:00.281664  360079 system_pods.go:89] "etcd-no-preload-907398" [90447e7e-0f30-4a5c-87d2-2ea1afc92dbe] Running
	I0229 02:18:00.281671  360079 system_pods.go:89] "kube-apiserver-no-preload-907398" [14240b57-03a5-4746-a246-28a338d7dbc1] Running
	I0229 02:18:00.281676  360079 system_pods.go:89] "kube-controller-manager-no-preload-907398" [45c6c2c5-ac84-4fc2-8981-e2f2ae6d3212] Running
	I0229 02:18:00.281681  360079 system_pods.go:89] "kube-proxy-r95w4" [32ff71b3-9287-4afa-9743-0e5e9068fa6d] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0229 02:18:00.281685  360079 system_pods.go:89] "kube-scheduler-no-preload-907398" [db7fbfcd-11c3-4ffb-a4c0-5e57ec7e069f] Running
	I0229 02:18:00.281695  360079 system_pods.go:89] "storage-provisioner" [e1001a81-038a-4eef-8b9f-8659058fa9c2] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0229 02:18:00.281717  360079 retry.go:31] will retry after 374.602979ms: missing components: kube-dns, kube-proxy
	I0229 02:18:00.701978  360079 system_pods.go:86] 8 kube-system pods found
	I0229 02:18:00.702028  360079 system_pods.go:89] "coredns-76f75df574-lh6fx" [092a35bd-da64-465c-aac2-2db805ba942f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0229 02:18:00.702039  360079 system_pods.go:89] "coredns-76f75df574-s7wgl" [edf9b2e4-e3ae-44b4-9561-722b74c19032] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0229 02:18:00.702048  360079 system_pods.go:89] "etcd-no-preload-907398" [90447e7e-0f30-4a5c-87d2-2ea1afc92dbe] Running
	I0229 02:18:00.702059  360079 system_pods.go:89] "kube-apiserver-no-preload-907398" [14240b57-03a5-4746-a246-28a338d7dbc1] Running
	I0229 02:18:00.702066  360079 system_pods.go:89] "kube-controller-manager-no-preload-907398" [45c6c2c5-ac84-4fc2-8981-e2f2ae6d3212] Running
	I0229 02:18:00.702075  360079 system_pods.go:89] "kube-proxy-r95w4" [32ff71b3-9287-4afa-9743-0e5e9068fa6d] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0229 02:18:00.702094  360079 system_pods.go:89] "kube-scheduler-no-preload-907398" [db7fbfcd-11c3-4ffb-a4c0-5e57ec7e069f] Running
	I0229 02:18:00.702107  360079 system_pods.go:89] "storage-provisioner" [e1001a81-038a-4eef-8b9f-8659058fa9c2] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0229 02:18:00.702131  360079 retry.go:31] will retry after 563.29938ms: missing components: kube-dns, kube-proxy
	I0229 02:18:01.275888  360079 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.031696303s)
	I0229 02:18:01.275958  360079 main.go:141] libmachine: Making call to close driver server
	I0229 02:18:01.275973  360079 main.go:141] libmachine: (no-preload-907398) Calling .Close
	I0229 02:18:01.276375  360079 main.go:141] libmachine: (no-preload-907398) DBG | Closing plugin on server side
	I0229 02:18:01.276422  360079 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:18:01.276435  360079 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:18:01.276448  360079 main.go:141] libmachine: Making call to close driver server
	I0229 02:18:01.276473  360079 main.go:141] libmachine: (no-preload-907398) Calling .Close
	I0229 02:18:01.276898  360079 main.go:141] libmachine: (no-preload-907398) DBG | Closing plugin on server side
	I0229 02:18:01.276957  360079 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:18:01.277012  360079 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:18:01.277032  360079 addons.go:470] Verifying addon metrics-server=true in "no-preload-907398"
	I0229 02:18:01.286612  360079 system_pods.go:86] 9 kube-system pods found
	I0229 02:18:01.286655  360079 system_pods.go:89] "coredns-76f75df574-lh6fx" [092a35bd-da64-465c-aac2-2db805ba942f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0229 02:18:01.286668  360079 system_pods.go:89] "coredns-76f75df574-s7wgl" [edf9b2e4-e3ae-44b4-9561-722b74c19032] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0229 02:18:01.286676  360079 system_pods.go:89] "etcd-no-preload-907398" [90447e7e-0f30-4a5c-87d2-2ea1afc92dbe] Running
	I0229 02:18:01.286686  360079 system_pods.go:89] "kube-apiserver-no-preload-907398" [14240b57-03a5-4746-a246-28a338d7dbc1] Running
	I0229 02:18:01.286697  360079 system_pods.go:89] "kube-controller-manager-no-preload-907398" [45c6c2c5-ac84-4fc2-8981-e2f2ae6d3212] Running
	I0229 02:18:01.286706  360079 system_pods.go:89] "kube-proxy-r95w4" [32ff71b3-9287-4afa-9743-0e5e9068fa6d] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0229 02:18:01.286716  360079 system_pods.go:89] "kube-scheduler-no-preload-907398" [db7fbfcd-11c3-4ffb-a4c0-5e57ec7e069f] Running
	I0229 02:18:01.286726  360079 system_pods.go:89] "metrics-server-57f55c9bc5-hln75" [8bfb6800-10c6-4154-8311-e568c1e146d1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0229 02:18:01.286745  360079 system_pods.go:89] "storage-provisioner" [e1001a81-038a-4eef-8b9f-8659058fa9c2] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0229 02:18:01.286772  360079 retry.go:31] will retry after 523.32187ms: missing components: kube-dns, kube-proxy
	I0229 02:18:01.829847  360079 system_pods.go:86] 9 kube-system pods found
	I0229 02:18:01.829894  360079 system_pods.go:89] "coredns-76f75df574-lh6fx" [092a35bd-da64-465c-aac2-2db805ba942f] Running
	I0229 02:18:01.829905  360079 system_pods.go:89] "coredns-76f75df574-s7wgl" [edf9b2e4-e3ae-44b4-9561-722b74c19032] Running
	I0229 02:18:01.829912  360079 system_pods.go:89] "etcd-no-preload-907398" [90447e7e-0f30-4a5c-87d2-2ea1afc92dbe] Running
	I0229 02:18:01.829924  360079 system_pods.go:89] "kube-apiserver-no-preload-907398" [14240b57-03a5-4746-a246-28a338d7dbc1] Running
	I0229 02:18:01.829932  360079 system_pods.go:89] "kube-controller-manager-no-preload-907398" [45c6c2c5-ac84-4fc2-8981-e2f2ae6d3212] Running
	I0229 02:18:01.829938  360079 system_pods.go:89] "kube-proxy-r95w4" [32ff71b3-9287-4afa-9743-0e5e9068fa6d] Running
	I0229 02:18:01.829944  360079 system_pods.go:89] "kube-scheduler-no-preload-907398" [db7fbfcd-11c3-4ffb-a4c0-5e57ec7e069f] Running
	I0229 02:18:01.829957  360079 system_pods.go:89] "metrics-server-57f55c9bc5-hln75" [8bfb6800-10c6-4154-8311-e568c1e146d1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0229 02:18:01.829967  360079 system_pods.go:89] "storage-provisioner" [e1001a81-038a-4eef-8b9f-8659058fa9c2] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0229 02:18:01.829989  360079 system_pods.go:126] duration metric: took 2.243096892s to wait for k8s-apps to be running ...
	I0229 02:18:01.830005  360079 system_svc.go:44] waiting for kubelet service to be running ....
	I0229 02:18:01.830091  360079 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 02:18:02.189987  360079 system_svc.go:56] duration metric: took 359.972364ms WaitForService to wait for kubelet.
	I0229 02:18:02.190024  360079 kubeadm.go:581] duration metric: took 3.786642999s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0229 02:18:02.190050  360079 node_conditions.go:102] verifying NodePressure condition ...
	I0229 02:18:02.190227  360079 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.576785344s)
	I0229 02:18:02.190281  360079 main.go:141] libmachine: Making call to close driver server
	I0229 02:18:02.190299  360079 main.go:141] libmachine: (no-preload-907398) Calling .Close
	I0229 02:18:02.190727  360079 main.go:141] libmachine: (no-preload-907398) DBG | Closing plugin on server side
	I0229 02:18:02.190798  360079 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:18:02.190810  360079 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:18:02.190819  360079 main.go:141] libmachine: Making call to close driver server
	I0229 02:18:02.190827  360079 main.go:141] libmachine: (no-preload-907398) Calling .Close
	I0229 02:18:02.193012  360079 main.go:141] libmachine: (no-preload-907398) DBG | Closing plugin on server side
	I0229 02:18:02.193025  360079 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:18:02.193062  360079 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:18:02.194791  360079 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-907398 addons enable metrics-server
	
	I0229 02:18:02.196317  360079 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0229 02:18:02.197863  360079 addons.go:505] enable addons completed in 3.84551804s: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
	I0229 02:18:02.210831  360079 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0229 02:18:02.210859  360079 node_conditions.go:123] node cpu capacity is 2
	I0229 02:18:02.210871  360079 node_conditions.go:105] duration metric: took 20.81411ms to run NodePressure ...
	I0229 02:18:02.210885  360079 start.go:228] waiting for startup goroutines ...
	I0229 02:18:02.210894  360079 start.go:233] waiting for cluster config update ...
	I0229 02:18:02.210911  360079 start.go:242] writing updated cluster config ...
	I0229 02:18:02.211195  360079 ssh_runner.go:195] Run: rm -f paused
	I0229 02:18:02.271875  360079 start.go:601] kubectl: 1.29.2, cluster: 1.29.0-rc.2 (minor skew: 0)
	I0229 02:18:02.273687  360079 out.go:177] * Done! kubectl is now configured to use "no-preload-907398" cluster and "default" namespace by default
	I0229 02:17:59.194448  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:17:59.212378  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:17:59.212455  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:17:59.272835  360776 cri.go:89] found id: ""
	I0229 02:17:59.272864  360776 logs.go:276] 0 containers: []
	W0229 02:17:59.272873  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:17:59.272879  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:17:59.272945  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:17:59.326044  360776 cri.go:89] found id: ""
	I0229 02:17:59.326097  360776 logs.go:276] 0 containers: []
	W0229 02:17:59.326110  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:17:59.326119  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:17:59.326195  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:17:59.375112  360776 cri.go:89] found id: ""
	I0229 02:17:59.375147  360776 logs.go:276] 0 containers: []
	W0229 02:17:59.375158  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:17:59.375165  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:17:59.375231  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:17:59.423465  360776 cri.go:89] found id: ""
	I0229 02:17:59.423489  360776 logs.go:276] 0 containers: []
	W0229 02:17:59.423498  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:17:59.423504  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:17:59.423564  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:17:59.464386  360776 cri.go:89] found id: ""
	I0229 02:17:59.464416  360776 logs.go:276] 0 containers: []
	W0229 02:17:59.464427  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:17:59.464433  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:17:59.464493  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:17:59.507714  360776 cri.go:89] found id: ""
	I0229 02:17:59.507746  360776 logs.go:276] 0 containers: []
	W0229 02:17:59.507759  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:17:59.507768  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:17:59.507836  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:17:59.563729  360776 cri.go:89] found id: ""
	I0229 02:17:59.563761  360776 logs.go:276] 0 containers: []
	W0229 02:17:59.563773  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:17:59.563781  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:17:59.563869  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:17:59.623366  360776 cri.go:89] found id: ""
	I0229 02:17:59.623392  360776 logs.go:276] 0 containers: []
	W0229 02:17:59.623404  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:17:59.623417  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:17:59.623432  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:17:59.700723  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:17:59.700783  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:17:59.722858  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:17:59.722904  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:17:59.830864  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:17:59.830892  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:17:59.830908  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:17:59.881944  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:17:59.881996  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
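This probe cycle (repeated nearly verbatim below with later timestamps) is the diagnostic loop for a control plane that never came up: every crictl query returns an empty id list, meaning no apiserver, etcd, scheduler, controller-manager, proxy, CoreDNS, or dashboard container was ever created, and "kubectl describe nodes" is refused on localhost:8443 because nothing is listening there. Each pass therefore falls back to gathering kubelet, dmesg, containerd, and container-status output before probing again.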
	I0229 02:18:00.814212  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:18:03.310396  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:18:05.240170  360217 kubeadm.go:322] [apiclient] All control plane components are healthy after 6.506059 seconds
	I0229 02:18:05.240365  360217 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0229 02:18:05.258467  360217 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0229 02:18:05.790274  360217 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0229 02:18:05.790547  360217 kubeadm.go:322] [mark-control-plane] Marking the node default-k8s-diff-port-254367 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0229 02:18:06.306317  360217 kubeadm.go:322] [bootstrap-token] Using token: up9wo1.za7nj6xpc5l7gy5b
	I0229 02:18:06.308235  360217 out.go:204]   - Configuring RBAC rules ...
	I0229 02:18:06.308376  360217 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0229 02:18:06.317348  360217 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0229 02:18:06.328386  360217 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0229 02:18:06.333738  360217 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0229 02:18:06.338257  360217 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0229 02:18:06.342124  360217 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0229 02:18:06.357763  360217 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0229 02:18:06.667301  360217 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0229 02:18:06.893898  360217 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0229 02:18:06.900021  360217 kubeadm.go:322] 
	I0229 02:18:06.900123  360217 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0229 02:18:06.900136  360217 kubeadm.go:322] 
	I0229 02:18:06.900244  360217 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0229 02:18:06.900251  360217 kubeadm.go:322] 
	I0229 02:18:06.900282  360217 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0229 02:18:06.900361  360217 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0229 02:18:06.900422  360217 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0229 02:18:06.900428  360217 kubeadm.go:322] 
	I0229 02:18:06.900491  360217 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0229 02:18:06.900505  360217 kubeadm.go:322] 
	I0229 02:18:06.900564  360217 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0229 02:18:06.900570  360217 kubeadm.go:322] 
	I0229 02:18:06.900633  360217 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0229 02:18:06.900725  360217 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0229 02:18:06.900814  360217 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0229 02:18:06.900832  360217 kubeadm.go:322] 
	I0229 02:18:06.900935  360217 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0229 02:18:06.901029  360217 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0229 02:18:06.901038  360217 kubeadm.go:322] 
	I0229 02:18:06.901139  360217 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8444 --token up9wo1.za7nj6xpc5l7gy5b \
	I0229 02:18:06.901267  360217 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:99ebea2fb56f25d35c82503561681d8a6f4727046bb097effad0d0203de43e75 \
	I0229 02:18:06.901296  360217 kubeadm.go:322] 	--control-plane 
	I0229 02:18:06.901302  360217 kubeadm.go:322] 
	I0229 02:18:06.901439  360217 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0229 02:18:06.901447  360217 kubeadm.go:322] 
	I0229 02:18:06.901554  360217 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8444 --token up9wo1.za7nj6xpc5l7gy5b \
	I0229 02:18:06.901681  360217 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:99ebea2fb56f25d35c82503561681d8a6f4727046bb097effad0d0203de43e75 
	I0229 02:18:06.904775  360217 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0229 02:18:06.904839  360217 cni.go:84] Creating CNI manager for ""
	I0229 02:18:06.904862  360217 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0229 02:18:06.906658  360217 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0229 02:18:02.462408  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:18:02.485957  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:18:02.486017  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:18:02.540769  360776 cri.go:89] found id: ""
	I0229 02:18:02.540803  360776 logs.go:276] 0 containers: []
	W0229 02:18:02.540814  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:18:02.540834  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:18:02.540902  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:18:02.584488  360776 cri.go:89] found id: ""
	I0229 02:18:02.584514  360776 logs.go:276] 0 containers: []
	W0229 02:18:02.584525  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:18:02.584532  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:18:02.584601  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:18:02.644908  360776 cri.go:89] found id: ""
	I0229 02:18:02.644943  360776 logs.go:276] 0 containers: []
	W0229 02:18:02.644956  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:18:02.644963  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:18:02.645031  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:18:02.702464  360776 cri.go:89] found id: ""
	I0229 02:18:02.702498  360776 logs.go:276] 0 containers: []
	W0229 02:18:02.702510  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:18:02.702519  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:18:02.702587  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:18:02.754980  360776 cri.go:89] found id: ""
	I0229 02:18:02.755008  360776 logs.go:276] 0 containers: []
	W0229 02:18:02.755020  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:18:02.755029  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:18:02.755101  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:18:02.807863  360776 cri.go:89] found id: ""
	I0229 02:18:02.807890  360776 logs.go:276] 0 containers: []
	W0229 02:18:02.807901  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:18:02.807908  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:18:02.807964  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:18:02.850910  360776 cri.go:89] found id: ""
	I0229 02:18:02.850943  360776 logs.go:276] 0 containers: []
	W0229 02:18:02.850956  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:18:02.850964  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:18:02.851034  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:18:02.895792  360776 cri.go:89] found id: ""
	I0229 02:18:02.895832  360776 logs.go:276] 0 containers: []
	W0229 02:18:02.895844  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:18:02.895857  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:18:02.895874  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:18:02.951353  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:18:02.951399  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:18:02.970262  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:18:02.970303  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:18:03.055141  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:18:03.055165  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:18:03.055182  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:18:03.091751  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:18:03.091791  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:18:05.646070  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:18:05.663225  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:18:05.663301  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:18:05.712565  360776 cri.go:89] found id: ""
	I0229 02:18:05.712604  360776 logs.go:276] 0 containers: []
	W0229 02:18:05.712623  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:18:05.712632  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:18:05.712697  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:18:05.761656  360776 cri.go:89] found id: ""
	I0229 02:18:05.761685  360776 logs.go:276] 0 containers: []
	W0229 02:18:05.761699  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:18:05.761715  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:18:05.761780  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:18:05.805264  360776 cri.go:89] found id: ""
	I0229 02:18:05.805299  360776 logs.go:276] 0 containers: []
	W0229 02:18:05.805310  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:18:05.805318  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:18:05.805382  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:18:05.853483  360776 cri.go:89] found id: ""
	I0229 02:18:05.853555  360776 logs.go:276] 0 containers: []
	W0229 02:18:05.853569  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:18:05.853578  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:18:05.853653  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:18:05.894561  360776 cri.go:89] found id: ""
	I0229 02:18:05.894589  360776 logs.go:276] 0 containers: []
	W0229 02:18:05.894608  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:18:05.894616  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:18:05.894680  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:18:05.937784  360776 cri.go:89] found id: ""
	I0229 02:18:05.937816  360776 logs.go:276] 0 containers: []
	W0229 02:18:05.937825  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:18:05.937832  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:18:05.937900  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:18:05.982000  360776 cri.go:89] found id: ""
	I0229 02:18:05.982028  360776 logs.go:276] 0 containers: []
	W0229 02:18:05.982039  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:18:05.982046  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:18:05.982136  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:18:06.025395  360776 cri.go:89] found id: ""
	I0229 02:18:06.025430  360776 logs.go:276] 0 containers: []
	W0229 02:18:06.025443  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:18:06.025455  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:18:06.025470  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:18:06.078175  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:18:06.078221  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:18:06.106042  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:18:06.106097  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:18:06.233485  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:18:06.233506  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:18:06.233522  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:18:06.273517  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:18:06.273557  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
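
Each `describe nodes` attempt fails identically: the kubeconfig points kubectl at localhost:8443 and the TCP dial is refused, which is consistent with the empty crictl results above; no kube-apiserver container exists, so nothing is listening on the secure port. A sketch of the underlying reachability check (address and port taken from the error text):

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // A refused TCP dial reproduces the kubectl error without involving TLS or auth.
        conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
        if err != nil {
            fmt.Println("apiserver not reachable:", err) // e.g. "connect: connection refused"
            return
        }
        conn.Close()
        fmt.Println("something is listening on localhost:8443")
    }
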
	I0229 02:18:06.908321  360217 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0229 02:18:06.928907  360217 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0229 02:18:06.976992  360217 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0229 02:18:06.977068  360217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:18:06.977074  360217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=f83faa3abac33ca85ff15afa19006ad0a2554d61 minikube.k8s.io/name=default-k8s-diff-port-254367 minikube.k8s.io/updated_at=2024_02_29T02_18_06_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:18:07.053045  360217 ops.go:34] apiserver oom_adj: -16
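
The `apiserver oom_adj: -16` line for the parallel default-k8s-diff-port-254367 run comes from the `cat /proc/$(pgrep kube-apiserver)/oom_adj` command just above; the negative score tells the kernel's OOM killer to sacrifice other processes before the apiserver. A small sketch of the same read, assuming a kube-apiserver process is running:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        // pgrep exits non-zero when nothing matches, so the error path covers
        // the "apiserver not running" case.
        out, err := exec.Command("pgrep", "kube-apiserver").Output()
        if err != nil {
            fmt.Println("kube-apiserver not running:", err)
            return
        }
        pid := strings.Fields(string(out))[0]
        adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
        if err != nil {
            fmt.Println("read failed:", err)
            return
        }
        fmt.Printf("apiserver oom_adj: %s", adj) // -16 in the run above
    }
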
	I0229 02:18:07.339410  360217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:18:07.840356  360217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:18:08.340151  360217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:18:08.840168  360217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:18:05.809727  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:18:08.311572  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:18:08.827599  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:18:08.845166  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:18:08.845270  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:18:08.891258  360776 cri.go:89] found id: ""
	I0229 02:18:08.891291  360776 logs.go:276] 0 containers: []
	W0229 02:18:08.891303  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:18:08.891311  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:18:08.891381  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:18:08.936833  360776 cri.go:89] found id: ""
	I0229 02:18:08.936868  360776 logs.go:276] 0 containers: []
	W0229 02:18:08.936879  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:18:08.936888  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:18:08.936962  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:18:08.979759  360776 cri.go:89] found id: ""
	I0229 02:18:08.979788  360776 logs.go:276] 0 containers: []
	W0229 02:18:08.979800  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:18:08.979812  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:18:08.979878  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:18:09.023686  360776 cri.go:89] found id: ""
	I0229 02:18:09.023722  360776 logs.go:276] 0 containers: []
	W0229 02:18:09.023734  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:18:09.023744  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:18:09.023817  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:18:09.068374  360776 cri.go:89] found id: ""
	I0229 02:18:09.068413  360776 logs.go:276] 0 containers: []
	W0229 02:18:09.068426  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:18:09.068434  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:18:09.068502  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:18:09.147948  360776 cri.go:89] found id: ""
	I0229 02:18:09.147976  360776 logs.go:276] 0 containers: []
	W0229 02:18:09.147985  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:18:09.147991  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:18:09.148043  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:18:09.202491  360776 cri.go:89] found id: ""
	I0229 02:18:09.202522  360776 logs.go:276] 0 containers: []
	W0229 02:18:09.202534  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:18:09.202542  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:18:09.202605  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:18:09.248957  360776 cri.go:89] found id: ""
	I0229 02:18:09.248992  360776 logs.go:276] 0 containers: []
	W0229 02:18:09.249005  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:18:09.249018  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:18:09.249038  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:18:09.318433  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:18:09.318476  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:18:09.335205  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:18:09.335240  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:18:09.417917  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:18:09.417952  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:18:09.417969  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:18:09.464739  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:18:09.464779  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:18:12.017825  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:18:12.033452  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:18:12.033518  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:18:12.082587  360776 cri.go:89] found id: ""
	I0229 02:18:12.082621  360776 logs.go:276] 0 containers: []
	W0229 02:18:12.082634  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:18:12.082642  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:18:12.082714  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:18:12.132662  360776 cri.go:89] found id: ""
	I0229 02:18:12.132696  360776 logs.go:276] 0 containers: []
	W0229 02:18:12.132717  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:18:12.132725  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:18:12.132795  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:18:12.204316  360776 cri.go:89] found id: ""
	I0229 02:18:12.204343  360776 logs.go:276] 0 containers: []
	W0229 02:18:12.204351  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:18:12.204357  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:18:12.204417  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:18:12.255146  360776 cri.go:89] found id: ""
	I0229 02:18:12.255178  360776 logs.go:276] 0 containers: []
	W0229 02:18:12.255190  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:18:12.255198  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:18:12.255265  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:18:12.299280  360776 cri.go:89] found id: ""
	I0229 02:18:12.299314  360776 logs.go:276] 0 containers: []
	W0229 02:18:12.299328  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:18:12.299337  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:18:12.299410  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:18:12.340621  360776 cri.go:89] found id: ""
	I0229 02:18:12.340646  360776 logs.go:276] 0 containers: []
	W0229 02:18:12.340658  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:18:12.340667  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:18:12.340722  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:18:09.339996  360217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:18:09.839471  360217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:18:10.340401  360217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:18:10.839457  360217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:18:11.340046  360217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:18:11.839746  360217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:18:12.339889  360217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:18:12.839469  360217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:18:13.339676  360217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:18:13.840012  360217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:18:10.809010  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:18:13.307420  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:18:12.391888  360776 cri.go:89] found id: ""
	I0229 02:18:12.391926  360776 logs.go:276] 0 containers: []
	W0229 02:18:12.391938  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:18:12.391945  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:18:12.392010  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:18:12.440219  360776 cri.go:89] found id: ""
	I0229 02:18:12.440250  360776 logs.go:276] 0 containers: []
	W0229 02:18:12.440263  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:18:12.440276  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:18:12.440290  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:18:12.495586  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:18:12.495621  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:18:12.513608  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:18:12.513653  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:18:12.587894  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:18:12.587929  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:18:12.587956  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:18:12.625496  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:18:12.625533  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:18:15.187090  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:18:15.206990  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:18:15.207074  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:18:15.261493  360776 cri.go:89] found id: ""
	I0229 02:18:15.261522  360776 logs.go:276] 0 containers: []
	W0229 02:18:15.261535  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:18:15.261543  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:18:15.261620  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:18:15.302408  360776 cri.go:89] found id: ""
	I0229 02:18:15.302437  360776 logs.go:276] 0 containers: []
	W0229 02:18:15.302449  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:18:15.302457  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:18:15.302524  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:18:15.340553  360776 cri.go:89] found id: ""
	I0229 02:18:15.340580  360776 logs.go:276] 0 containers: []
	W0229 02:18:15.340590  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:18:15.340598  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:18:15.340661  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:18:15.383659  360776 cri.go:89] found id: ""
	I0229 02:18:15.383688  360776 logs.go:276] 0 containers: []
	W0229 02:18:15.383699  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:18:15.383708  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:18:15.383777  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:18:15.433164  360776 cri.go:89] found id: ""
	I0229 02:18:15.433200  360776 logs.go:276] 0 containers: []
	W0229 02:18:15.433212  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:18:15.433220  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:18:15.433293  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:18:15.479950  360776 cri.go:89] found id: ""
	I0229 02:18:15.479993  360776 logs.go:276] 0 containers: []
	W0229 02:18:15.480006  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:18:15.480014  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:18:15.480078  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:18:15.519601  360776 cri.go:89] found id: ""
	I0229 02:18:15.519628  360776 logs.go:276] 0 containers: []
	W0229 02:18:15.519637  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:18:15.519644  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:18:15.519707  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:18:15.564564  360776 cri.go:89] found id: ""
	I0229 02:18:15.564598  360776 logs.go:276] 0 containers: []
	W0229 02:18:15.564610  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:18:15.564624  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:18:15.564643  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:18:15.615855  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:18:15.615894  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:18:15.632464  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:18:15.632505  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:18:15.713177  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:18:15.713198  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:18:15.713214  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:18:15.749296  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:18:15.749326  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:18:14.340255  360217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:18:14.839541  360217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:18:15.339620  360217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:18:15.840469  360217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:18:16.339540  360217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:18:16.840203  360217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:18:17.339841  360217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:18:17.839673  360217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:18:18.339956  360217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:18:18.839965  360217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:18:19.023067  360217 kubeadm.go:1088] duration metric: took 12.046075339s to wait for elevateKubeSystemPrivileges.
	I0229 02:18:19.023110  360217 kubeadm.go:406] StartCluster complete in 4m58.952060994s
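
The long burst of `kubectl get sa default` calls that precedes the duration metric is a fixed-interval poll: the command is retried roughly every 500ms (visible in the alternating .339/.839 timestamps) until the default service account exists, which took 12.05s here. A sketch of that wait, with the kubectl path and kubeconfig copied from the log:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitForDefaultSA retries `kubectl get sa default` until it succeeds or the
    // deadline passes, mirroring the ~500ms cadence visible in the log.
    func waitForDefaultSA(kubectl string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if err := exec.Command("sudo", kubectl, "get", "sa", "default",
                "--kubeconfig=/var/lib/minikube/kubeconfig").Run(); err == nil {
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("default service account not created within %s", timeout)
    }

    func main() {
        start := time.Now()
        if err := waitForDefaultSA("/var/lib/minikube/binaries/v1.28.4/kubectl", 2*time.Minute); err != nil {
            fmt.Println(err)
            return
        }
        fmt.Println("took", time.Since(start)) // ~12s in the run above
    }
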
	I0229 02:18:19.023136  360217 settings.go:142] acquiring lock: {Name:mkf6d985c87ae1ba2300543c86d438bf48134dc6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:18:19.023240  360217 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18063-309085/kubeconfig
	I0229 02:18:19.025049  360217 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18063-309085/kubeconfig: {Name:mkd85f0f36cbc770f723a754929a01738907b7bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:18:19.027123  360217 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0229 02:18:19.027409  360217 config.go:182] Loaded profile config "default-k8s-diff-port-254367": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0229 02:18:19.027464  360217 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
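
The `toEnable` map is minikube's full addon registry flattened to name -> bool; in this profile only dashboard, default-storageclass, metrics-server, and storage-provisioner are true, and the `Setting addon ...=true` lines that follow iterate exactly that enabled subset. A trivial sketch of the filtering step (map abbreviated; the remaining entries are all false):

    package main

    import (
        "fmt"
        "sort"
    )

    func main() {
        toEnable := map[string]bool{
            "dashboard": true, "default-storageclass": true,
            "metrics-server": true, "storage-provisioner": true,
            "ingress": false, "registry": false, // ...and so on for the other addons
        }
        var enabled []string
        for name, on := range toEnable {
            if on {
                enabled = append(enabled, name)
            }
        }
        sort.Strings(enabled) // stable output for logging
        fmt.Println(enabled)  // [dashboard default-storageclass metrics-server storage-provisioner]
    }
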
	I0229 02:18:19.027538  360217 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-254367"
	I0229 02:18:19.027561  360217 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-254367"
	W0229 02:18:19.027576  360217 addons.go:243] addon storage-provisioner should already be in state true
	I0229 02:18:19.027588  360217 addons.go:69] Setting dashboard=true in profile "default-k8s-diff-port-254367"
	I0229 02:18:19.027620  360217 addons.go:234] Setting addon dashboard=true in "default-k8s-diff-port-254367"
	I0229 02:18:19.027628  360217 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-254367"
	W0229 02:18:19.027633  360217 addons.go:243] addon dashboard should already be in state true
	I0229 02:18:19.027642  360217 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-254367"
	I0229 02:18:19.027681  360217 host.go:66] Checking if "default-k8s-diff-port-254367" exists ...
	I0229 02:18:19.028079  360217 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0229 02:18:19.028088  360217 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0229 02:18:19.028108  360217 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:18:19.028114  360217 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:18:19.027623  360217 host.go:66] Checking if "default-k8s-diff-port-254367" exists ...
	I0229 02:18:19.028343  360217 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-254367"
	I0229 02:18:19.028368  360217 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-254367"
	W0229 02:18:19.028377  360217 addons.go:243] addon metrics-server should already be in state true
	I0229 02:18:19.028499  360217 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0229 02:18:19.028537  360217 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:18:19.028563  360217 host.go:66] Checking if "default-k8s-diff-port-254367" exists ...
	I0229 02:18:19.028931  360217 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0229 02:18:19.028959  360217 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:18:19.047714  360217 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39539
	I0229 02:18:19.048288  360217 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:18:19.048404  360217 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38743
	I0229 02:18:19.048502  360217 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33401
	I0229 02:18:19.048785  360217 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:18:19.048915  360217 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:18:19.049087  360217 main.go:141] libmachine: Using API Version  1
	I0229 02:18:19.049106  360217 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:18:19.049417  360217 main.go:141] libmachine: Using API Version  1
	I0229 02:18:19.049443  360217 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:18:19.049468  360217 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:18:19.049605  360217 main.go:141] libmachine: Using API Version  1
	I0229 02:18:19.049623  360217 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:18:19.049632  360217 main.go:141] libmachine: (default-k8s-diff-port-254367) Calling .GetState
	I0229 02:18:19.049830  360217 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:18:19.049990  360217 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:18:19.050491  360217 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0229 02:18:19.050525  360217 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:18:19.050742  360217 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0229 02:18:19.050780  360217 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:18:19.052986  360217 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38555
	I0229 02:18:19.056042  360217 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-254367"
	W0229 02:18:19.056065  360217 addons.go:243] addon default-storageclass should already be in state true
	I0229 02:18:19.056101  360217 host.go:66] Checking if "default-k8s-diff-port-254367" exists ...
	I0229 02:18:19.056338  360217 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:18:19.056649  360217 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0229 02:18:19.056674  360217 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:18:19.057319  360217 main.go:141] libmachine: Using API Version  1
	I0229 02:18:19.057403  360217 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:18:19.058140  360217 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:18:19.059410  360217 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0229 02:18:19.059437  360217 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:18:19.069542  360217 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35931
	I0229 02:18:19.069932  360217 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:18:19.070411  360217 main.go:141] libmachine: Using API Version  1
	I0229 02:18:19.070438  360217 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:18:19.070747  360217 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:18:19.070987  360217 main.go:141] libmachine: (default-k8s-diff-port-254367) Calling .GetState
	I0229 02:18:19.072429  360217 main.go:141] libmachine: (default-k8s-diff-port-254367) Calling .DriverName
	I0229 02:18:19.074634  360217 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0229 02:18:19.076733  360217 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0229 02:18:19.078676  360217 addons.go:426] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0229 02:18:19.078702  360217 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0229 02:18:19.078723  360217 main.go:141] libmachine: (default-k8s-diff-port-254367) Calling .GetSSHHostname
	I0229 02:18:19.078731  360217 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40495
	I0229 02:18:19.078949  360217 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42347
	I0229 02:18:19.079355  360217 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:18:19.079753  360217 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:18:19.080120  360217 main.go:141] libmachine: Using API Version  1
	I0229 02:18:19.080143  360217 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:18:19.080374  360217 main.go:141] libmachine: Using API Version  1
	I0229 02:18:19.080389  360217 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:18:19.080491  360217 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:18:19.080718  360217 main.go:141] libmachine: (default-k8s-diff-port-254367) Calling .GetState
	I0229 02:18:19.080832  360217 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:18:19.081012  360217 main.go:141] libmachine: (default-k8s-diff-port-254367) Calling .GetState
	I0229 02:18:19.082727  360217 main.go:141] libmachine: (default-k8s-diff-port-254367) Calling .DriverName
	I0229 02:18:19.083018  360217 main.go:141] libmachine: (default-k8s-diff-port-254367) Calling .DriverName
	I0229 02:18:19.084629  360217 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0229 02:18:19.083192  360217 main.go:141] libmachine: (default-k8s-diff-port-254367) DBG | domain default-k8s-diff-port-254367 has defined MAC address 52:54:00:52:a9:36 in network mk-default-k8s-diff-port-254367
	I0229 02:18:19.083785  360217 main.go:141] libmachine: (default-k8s-diff-port-254367) Calling .GetSSHPort
	I0229 02:18:19.086324  360217 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0229 02:18:19.086355  360217 main.go:141] libmachine: (default-k8s-diff-port-254367) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:a9:36", ip: ""} in network mk-default-k8s-diff-port-254367: {Iface:virbr3 ExpiryTime:2024-02-29 03:09:29 +0000 UTC Type:0 Mac:52:54:00:52:a9:36 Iaid: IPaddr:192.168.72.88 Prefix:24 Hostname:default-k8s-diff-port-254367 Clientid:01:52:54:00:52:a9:36}
	I0229 02:18:19.087244  360217 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34581
	I0229 02:18:19.087643  360217 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 02:18:19.088961  360217 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0229 02:18:19.088981  360217 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0229 02:18:19.089000  360217 main.go:141] libmachine: (default-k8s-diff-port-254367) Calling .GetSSHHostname
	I0229 02:18:19.087691  360217 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0229 02:18:19.089061  360217 main.go:141] libmachine: (default-k8s-diff-port-254367) Calling .GetSSHHostname
	I0229 02:18:19.087724  360217 main.go:141] libmachine: (default-k8s-diff-port-254367) DBG | domain default-k8s-diff-port-254367 has defined IP address 192.168.72.88 and MAC address 52:54:00:52:a9:36 in network mk-default-k8s-diff-port-254367
	I0229 02:18:19.087806  360217 main.go:141] libmachine: (default-k8s-diff-port-254367) Calling .GetSSHKeyPath
	I0229 02:18:19.087943  360217 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:18:19.089282  360217 main.go:141] libmachine: (default-k8s-diff-port-254367) Calling .GetSSHUsername
	I0229 02:18:19.089425  360217 sshutil.go:53] new ssh client: &{IP:192.168.72.88 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-309085/.minikube/machines/default-k8s-diff-port-254367/id_rsa Username:docker}
	I0229 02:18:19.090396  360217 main.go:141] libmachine: Using API Version  1
	I0229 02:18:19.090419  360217 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:18:19.090890  360217 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:18:19.091717  360217 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0229 02:18:19.091743  360217 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:18:19.092187  360217 main.go:141] libmachine: (default-k8s-diff-port-254367) DBG | domain default-k8s-diff-port-254367 has defined MAC address 52:54:00:52:a9:36 in network mk-default-k8s-diff-port-254367
	I0229 02:18:19.092654  360217 main.go:141] libmachine: (default-k8s-diff-port-254367) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:a9:36", ip: ""} in network mk-default-k8s-diff-port-254367: {Iface:virbr3 ExpiryTime:2024-02-29 03:09:29 +0000 UTC Type:0 Mac:52:54:00:52:a9:36 Iaid: IPaddr:192.168.72.88 Prefix:24 Hostname:default-k8s-diff-port-254367 Clientid:01:52:54:00:52:a9:36}
	I0229 02:18:19.092677  360217 main.go:141] libmachine: (default-k8s-diff-port-254367) DBG | domain default-k8s-diff-port-254367 has defined IP address 192.168.72.88 and MAC address 52:54:00:52:a9:36 in network mk-default-k8s-diff-port-254367
	I0229 02:18:19.092801  360217 main.go:141] libmachine: (default-k8s-diff-port-254367) Calling .GetSSHPort
	I0229 02:18:19.093024  360217 main.go:141] libmachine: (default-k8s-diff-port-254367) Calling .GetSSHKeyPath
	I0229 02:18:19.093212  360217 main.go:141] libmachine: (default-k8s-diff-port-254367) DBG | domain default-k8s-diff-port-254367 has defined MAC address 52:54:00:52:a9:36 in network mk-default-k8s-diff-port-254367
	I0229 02:18:19.093402  360217 main.go:141] libmachine: (default-k8s-diff-port-254367) Calling .GetSSHUsername
	I0229 02:18:19.093539  360217 sshutil.go:53] new ssh client: &{IP:192.168.72.88 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-309085/.minikube/machines/default-k8s-diff-port-254367/id_rsa Username:docker}
	I0229 02:18:19.093806  360217 main.go:141] libmachine: (default-k8s-diff-port-254367) Calling .GetSSHPort
	I0229 02:18:19.093828  360217 main.go:141] libmachine: (default-k8s-diff-port-254367) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:a9:36", ip: ""} in network mk-default-k8s-diff-port-254367: {Iface:virbr3 ExpiryTime:2024-02-29 03:09:29 +0000 UTC Type:0 Mac:52:54:00:52:a9:36 Iaid: IPaddr:192.168.72.88 Prefix:24 Hostname:default-k8s-diff-port-254367 Clientid:01:52:54:00:52:a9:36}
	I0229 02:18:19.093851  360217 main.go:141] libmachine: (default-k8s-diff-port-254367) DBG | domain default-k8s-diff-port-254367 has defined IP address 192.168.72.88 and MAC address 52:54:00:52:a9:36 in network mk-default-k8s-diff-port-254367
	I0229 02:18:19.093940  360217 main.go:141] libmachine: (default-k8s-diff-port-254367) Calling .GetSSHKeyPath
	I0229 02:18:19.094226  360217 main.go:141] libmachine: (default-k8s-diff-port-254367) Calling .GetSSHUsername
	I0229 02:18:19.094421  360217 sshutil.go:53] new ssh client: &{IP:192.168.72.88 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-309085/.minikube/machines/default-k8s-diff-port-254367/id_rsa Username:docker}
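
The `scp memory --> <path> (N bytes)` lines pair with these `new ssh client` entries: the addon manifests never touch the local disk; minikube streams the in-memory bytes over SSH to the VM at 192.168.72.88 using the machine's id_rsa key and the docker user shown above. A hedged sketch of that pattern with golang.org/x/crypto/ssh, piping the bytes into `sudo tee` (the remote command minikube actually uses may differ; the key path and target file here are placeholders):

    package main

    import (
        "bytes"
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    // pushFile writes data to remotePath by streaming it into `sudo tee`.
    func pushFile(client *ssh.Client, data []byte, remotePath string) error {
        sess, err := client.NewSession()
        if err != nil {
            return err
        }
        defer sess.Close()
        sess.Stdin = bytes.NewReader(data)
        return sess.Run("sudo tee " + remotePath + " >/dev/null")
    }

    func main() {
        key, _ := os.ReadFile(os.Getenv("HOME") + "/.minikube/machines/default-k8s-diff-port-254367/id_rsa")
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            fmt.Println("bad key:", err)
            return
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
        }
        client, err := ssh.Dial("tcp", "192.168.72.88:22", cfg)
        if err != nil {
            fmt.Println("dial:", err)
            return
        }
        defer client.Close()
        if err := pushFile(client, []byte("# manifest bytes here\n"), "/tmp/example.yaml"); err != nil {
            fmt.Println("push:", err)
        }
    }
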
	W0229 02:18:19.100332  360217 kapi.go:245] failed rescaling "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-254367" context to 1 replicas: non-retryable failure while rescaling coredns deployment: Operation cannot be fulfilled on deployments.apps "coredns": the object has been modified; please apply your changes to the latest version and try again
	E0229 02:18:19.100363  360217 start.go:219] Unable to scale down deployment "coredns" in namespace "kube-system" to 1 replica: non-retryable failure while rescaling coredns deployment: Operation cannot be fulfilled on deployments.apps "coredns": the object has been modified; please apply your changes to the latest version and try again
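
The rescale failure is ordinary optimistic concurrency: minikube read the coredns Deployment, something else updated it first, and the write came back with a stale resourceVersion, hence "the object has been modified". The warning is non-fatal here, but the standard cure is to re-read and retry on conflict; a sketch using client-go's retry helper (kubeconfig path from the log, replica count from the intended rescale to 1):

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
        "k8s.io/client-go/util/retry"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            fmt.Println(err)
            return
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            fmt.Println(err)
            return
        }
        // RetryOnConflict re-reads the object on each attempt, so a concurrent
        // update no longer aborts the rescale.
        err = retry.RetryOnConflict(retry.DefaultRetry, func() error {
            dep, err := cs.AppsV1().Deployments("kube-system").Get(context.TODO(), "coredns", metav1.GetOptions{})
            if err != nil {
                return err
            }
            replicas := int32(1)
            dep.Spec.Replicas = &replicas
            _, err = cs.AppsV1().Deployments("kube-system").Update(context.TODO(), dep, metav1.UpdateOptions{})
            return err
        })
        if err != nil {
            fmt.Println("rescale failed:", err)
        }
    }
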
	I0229 02:18:19.100388  360217 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.72.88 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0229 02:18:19.101941  360217 out.go:177] * Verifying Kubernetes components...
	I0229 02:18:19.103689  360217 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 02:18:19.114276  360217 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44089
	I0229 02:18:19.114684  360217 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:18:19.115166  360217 main.go:141] libmachine: Using API Version  1
	I0229 02:18:19.115190  360217 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:18:19.115557  360217 main.go:141] libmachine: () Calling .GetMachineName
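
Each `Launching plugin server for driver kvm2` / `Plugin server listening at address 127.0.0.1:<port>` pair above is a separate driver process: libmachine execs the docker-machine-driver-kvm2 binary, the child binds an ephemeral loopback port for its RPC endpoint, and the parent connects to whatever port the kernel handed out, which is why the same driver appears on 39539, 38743, 33401, and so on. The allocation itself is just a bind to port 0:

    package main

    import (
        "fmt"
        "net"
    )

    func main() {
        // Port 0 asks the kernel for any free port; each plugin process gets
        // its own, hence the distinct ports in the log.
        ln, err := net.Listen("tcp", "127.0.0.1:0")
        if err != nil {
            fmt.Println(err)
            return
        }
        defer ln.Close()
        fmt.Println("Plugin server listening at address", ln.Addr())
    }
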
	I0229 02:18:15.308627  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:18:17.807561  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:18:19.808357  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
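
Process 361093 (a third concurrent profile) has been polling a single pod, metrics-server-57f55c9bc5-9sdkl, whose Ready condition stays False for the whole window shown. The check behind pod_ready.go reduces to scanning status.conditions; a sketch with client-go (pod name and namespace from the log, kubeconfig path assumed):

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // isPodReady reports whether the pod's Ready condition is True.
    func isPodReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            fmt.Println(err)
            return
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            fmt.Println(err)
            return
        }
        pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "metrics-server-57f55c9bc5-9sdkl", metav1.GetOptions{})
        if err != nil {
            fmt.Println(err)
            return
        }
        fmt.Printf("pod %q Ready: %v\n", pod.Name, isPodReady(pod)) // False throughout this run
    }
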
	I0229 02:18:18.299689  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:18:18.315449  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:18:18.315523  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:18:18.357310  360776 cri.go:89] found id: ""
	I0229 02:18:18.357347  360776 logs.go:276] 0 containers: []
	W0229 02:18:18.357360  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:18:18.357369  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:18:18.357427  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:18:18.410178  360776 cri.go:89] found id: ""
	I0229 02:18:18.410212  360776 logs.go:276] 0 containers: []
	W0229 02:18:18.410224  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:18:18.410232  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:18:18.410300  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:18:18.452273  360776 cri.go:89] found id: ""
	I0229 02:18:18.452303  360776 logs.go:276] 0 containers: []
	W0229 02:18:18.452315  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:18:18.452330  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:18:18.452398  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:18:18.493134  360776 cri.go:89] found id: ""
	I0229 02:18:18.493161  360776 logs.go:276] 0 containers: []
	W0229 02:18:18.493170  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:18:18.493176  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:18:18.493247  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:18:18.530812  360776 cri.go:89] found id: ""
	I0229 02:18:18.530843  360776 logs.go:276] 0 containers: []
	W0229 02:18:18.530855  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:18:18.530864  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:18:18.530931  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:18:18.572183  360776 cri.go:89] found id: ""
	I0229 02:18:18.572216  360776 logs.go:276] 0 containers: []
	W0229 02:18:18.572231  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:18:18.572240  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:18:18.572314  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:18:18.612117  360776 cri.go:89] found id: ""
	I0229 02:18:18.612148  360776 logs.go:276] 0 containers: []
	W0229 02:18:18.612160  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:18:18.612169  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:18:18.612230  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:18:18.653827  360776 cri.go:89] found id: ""
	I0229 02:18:18.653855  360776 logs.go:276] 0 containers: []
	W0229 02:18:18.653866  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:18:18.653879  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:18:18.653898  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:18:18.688058  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:18:18.688094  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:18:18.735458  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:18:18.735493  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:18:18.795735  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:18:18.795780  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:18:18.816207  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:18:18.816239  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:18:18.928414  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:18:21.429284  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:18:21.445010  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:18:21.445084  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:18:21.484084  360776 cri.go:89] found id: ""
	I0229 02:18:21.484128  360776 logs.go:276] 0 containers: []
	W0229 02:18:21.484141  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:18:21.484159  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:18:21.484223  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:18:21.536516  360776 cri.go:89] found id: ""
	I0229 02:18:21.536550  360776 logs.go:276] 0 containers: []
	W0229 02:18:21.536563  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:18:21.536571  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:18:21.536636  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:18:21.588732  360776 cri.go:89] found id: ""
	I0229 02:18:21.588761  360776 logs.go:276] 0 containers: []
	W0229 02:18:21.588773  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:18:21.588782  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:18:21.588843  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:18:21.644434  360776 cri.go:89] found id: ""
	I0229 02:18:21.644470  360776 logs.go:276] 0 containers: []
	W0229 02:18:21.644483  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:18:21.644491  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:18:21.644560  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:18:21.685496  360776 cri.go:89] found id: ""
	I0229 02:18:21.685528  360776 logs.go:276] 0 containers: []
	W0229 02:18:21.685540  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:18:21.685548  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:18:21.685615  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:18:21.741146  360776 cri.go:89] found id: ""
	I0229 02:18:21.741176  360776 logs.go:276] 0 containers: []
	W0229 02:18:21.741188  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:18:21.741196  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:18:21.741287  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:18:21.790924  360776 cri.go:89] found id: ""
	I0229 02:18:21.790953  360776 logs.go:276] 0 containers: []
	W0229 02:18:21.790964  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:18:21.790972  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:18:21.791040  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:18:21.843079  360776 cri.go:89] found id: ""
	I0229 02:18:21.843107  360776 logs.go:276] 0 containers: []
	W0229 02:18:21.843118  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:18:21.843131  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:18:21.843155  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:18:21.917006  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:18:21.917035  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:18:21.987268  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:18:21.987313  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:18:22.009660  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:18:22.009699  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:18:22.101976  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:18:22.102000  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:18:22.102017  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:18:19.115785  360217 main.go:141] libmachine: (default-k8s-diff-port-254367) Calling .GetState
	I0229 02:18:19.118586  360217 main.go:141] libmachine: (default-k8s-diff-port-254367) Calling .DriverName
	I0229 02:18:19.118869  360217 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0229 02:18:19.118886  360217 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0229 02:18:19.118905  360217 main.go:141] libmachine: (default-k8s-diff-port-254367) Calling .GetSSHHostname
	I0229 02:18:19.121918  360217 main.go:141] libmachine: (default-k8s-diff-port-254367) DBG | domain default-k8s-diff-port-254367 has defined MAC address 52:54:00:52:a9:36 in network mk-default-k8s-diff-port-254367
	I0229 02:18:19.122332  360217 main.go:141] libmachine: (default-k8s-diff-port-254367) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:a9:36", ip: ""} in network mk-default-k8s-diff-port-254367: {Iface:virbr3 ExpiryTime:2024-02-29 03:09:29 +0000 UTC Type:0 Mac:52:54:00:52:a9:36 Iaid: IPaddr:192.168.72.88 Prefix:24 Hostname:default-k8s-diff-port-254367 Clientid:01:52:54:00:52:a9:36}
	I0229 02:18:19.122364  360217 main.go:141] libmachine: (default-k8s-diff-port-254367) DBG | domain default-k8s-diff-port-254367 has defined IP address 192.168.72.88 and MAC address 52:54:00:52:a9:36 in network mk-default-k8s-diff-port-254367
	I0229 02:18:19.122552  360217 main.go:141] libmachine: (default-k8s-diff-port-254367) Calling .GetSSHPort
	I0229 02:18:19.122770  360217 main.go:141] libmachine: (default-k8s-diff-port-254367) Calling .GetSSHKeyPath
	I0229 02:18:19.122996  360217 main.go:141] libmachine: (default-k8s-diff-port-254367) Calling .GetSSHUsername
	I0229 02:18:19.123154  360217 sshutil.go:53] new ssh client: &{IP:192.168.72.88 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-309085/.minikube/machines/default-k8s-diff-port-254367/id_rsa Username:docker}
	I0229 02:18:19.269274  360217 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-254367" to be "Ready" ...
	I0229 02:18:19.269550  360217 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
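
The sed pipeline above patches CoreDNS's Corefile in transit: it fetches the coredns ConfigMap, inserts a hosts block that resolves host.minikube.internal to the VM's gateway (192.168.72.1) ahead of the forward plugin, inserts log before errors, and pipes the result back through `kubectl replace`. After the rewrite the server block looks roughly like this (unchanged plugins elided):

    .:53 {
        log
        errors
        hosts {
           192.168.72.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf
    }
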
	I0229 02:18:19.282334  360217 node_ready.go:49] node "default-k8s-diff-port-254367" has status "Ready":"True"
	I0229 02:18:19.282362  360217 node_ready.go:38] duration metric: took 13.046941ms waiting for node "default-k8s-diff-port-254367" to be "Ready" ...
	I0229 02:18:19.282377  360217 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 02:18:19.298326  360217 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-254367" in "kube-system" namespace to be "Ready" ...
	I0229 02:18:19.311217  360217 pod_ready.go:92] pod "etcd-default-k8s-diff-port-254367" in "kube-system" namespace has status "Ready":"True"
	I0229 02:18:19.311243  360217 pod_ready.go:81] duration metric: took 12.887306ms waiting for pod "etcd-default-k8s-diff-port-254367" in "kube-system" namespace to be "Ready" ...
	I0229 02:18:19.311252  360217 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-254367" in "kube-system" namespace to be "Ready" ...
	I0229 02:18:19.317185  360217 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-254367" in "kube-system" namespace has status "Ready":"True"
	I0229 02:18:19.317210  360217 pod_ready.go:81] duration metric: took 5.951807ms waiting for pod "kube-apiserver-default-k8s-diff-port-254367" in "kube-system" namespace to be "Ready" ...
	I0229 02:18:19.317219  360217 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-254367" in "kube-system" namespace to be "Ready" ...
	I0229 02:18:19.330495  360217 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0229 02:18:19.330519  360217 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0229 02:18:19.331739  360217 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-254367" in "kube-system" namespace has status "Ready":"True"
	I0229 02:18:19.331775  360217 pod_ready.go:81] duration metric: took 14.548327ms waiting for pod "kube-controller-manager-default-k8s-diff-port-254367" in "kube-system" namespace to be "Ready" ...
	I0229 02:18:19.331791  360217 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-dlgmz" in "kube-system" namespace to be "Ready" ...
	I0229 02:18:19.363610  360217 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0229 02:18:19.461745  360217 addons.go:426] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0229 02:18:19.461779  360217 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0229 02:18:19.467030  360217 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0229 02:18:19.467234  360217 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0229 02:18:19.467253  360217 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0229 02:18:19.568507  360217 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0229 02:18:19.568540  360217 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0229 02:18:19.641306  360217 addons.go:426] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0229 02:18:19.641346  360217 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0229 02:18:19.750251  360217 addons.go:426] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0229 02:18:19.750282  360217 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0229 02:18:19.807358  360217 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0229 02:18:19.886145  360217 addons.go:426] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0229 02:18:19.886169  360217 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0229 02:18:20.066662  360217 addons.go:426] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0229 02:18:20.066699  360217 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0229 02:18:20.097965  360217 addons.go:426] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0229 02:18:20.097990  360217 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0229 02:18:20.136049  360217 addons.go:426] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0229 02:18:20.136075  360217 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0229 02:18:20.232757  360217 addons.go:426] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0229 02:18:20.232780  360217 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0229 02:18:20.290653  360217 addons.go:426] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0229 02:18:20.290679  360217 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0229 02:18:20.359549  360217 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0229 02:18:21.354053  360217 pod_ready.go:102] pod "kube-proxy-dlgmz" in "kube-system" namespace has status "Ready":"False"
	I0229 02:18:21.788753  360217 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.519159841s)
	I0229 02:18:21.788798  360217 start.go:929] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
	I0229 02:18:22.362286  360217 pod_ready.go:92] pod "kube-proxy-dlgmz" in "kube-system" namespace has status "Ready":"True"
	I0229 02:18:22.362318  360217 pod_ready.go:81] duration metric: took 3.030515197s waiting for pod "kube-proxy-dlgmz" in "kube-system" namespace to be "Ready" ...
	I0229 02:18:22.362331  360217 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-254367" in "kube-system" namespace to be "Ready" ...
	I0229 02:18:22.392397  360217 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-254367" in "kube-system" namespace has status "Ready":"True"
	I0229 02:18:22.392428  360217 pod_ready.go:81] duration metric: took 30.087397ms waiting for pod "kube-scheduler-default-k8s-diff-port-254367" in "kube-system" namespace to be "Ready" ...
	I0229 02:18:22.392441  360217 pod_ready.go:38] duration metric: took 3.110051734s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 02:18:22.392462  360217 api_server.go:52] waiting for apiserver process to appear ...
	I0229 02:18:22.392516  360217 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:18:22.755340  360217 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.288276833s)
	I0229 02:18:22.755387  360217 main.go:141] libmachine: Making call to close driver server
	I0229 02:18:22.755402  360217 main.go:141] libmachine: (default-k8s-diff-port-254367) Calling .Close
	I0229 02:18:22.755534  360217 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.948137303s)
	I0229 02:18:22.755568  360217 main.go:141] libmachine: Making call to close driver server
	I0229 02:18:22.755581  360217 main.go:141] libmachine: (default-k8s-diff-port-254367) Calling .Close
	I0229 02:18:22.755693  360217 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.392056284s)
	I0229 02:18:22.755714  360217 main.go:141] libmachine: Making call to close driver server
	I0229 02:18:22.755723  360217 main.go:141] libmachine: (default-k8s-diff-port-254367) Calling .Close
	I0229 02:18:22.755982  360217 main.go:141] libmachine: (default-k8s-diff-port-254367) DBG | Closing plugin on server side
	I0229 02:18:22.756023  360217 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:18:22.756037  360217 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:18:22.756047  360217 main.go:141] libmachine: Making call to close driver server
	I0229 02:18:22.756052  360217 main.go:141] libmachine: (default-k8s-diff-port-254367) Calling .Close
	I0229 02:18:22.756327  360217 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:18:22.756341  360217 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:18:22.756357  360217 main.go:141] libmachine: Making call to close driver server
	I0229 02:18:22.756366  360217 main.go:141] libmachine: (default-k8s-diff-port-254367) Calling .Close
	I0229 02:18:22.760172  360217 main.go:141] libmachine: (default-k8s-diff-port-254367) DBG | Closing plugin on server side
	I0229 02:18:22.760183  360217 main.go:141] libmachine: (default-k8s-diff-port-254367) DBG | Closing plugin on server side
	I0229 02:18:22.760221  360217 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:18:22.760234  360217 main.go:141] libmachine: (default-k8s-diff-port-254367) DBG | Closing plugin on server side
	I0229 02:18:22.760250  360217 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:18:22.760268  360217 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:18:22.760258  360217 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:18:22.760298  360217 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:18:22.760278  360217 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:18:22.760380  360217 main.go:141] libmachine: Making call to close driver server
	I0229 02:18:22.760390  360217 main.go:141] libmachine: (default-k8s-diff-port-254367) Calling .Close
	I0229 02:18:22.760627  360217 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:18:22.760646  360217 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:18:22.760659  360217 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-254367"
	I0229 02:18:22.788927  360217 main.go:141] libmachine: Making call to close driver server
	I0229 02:18:22.788955  360217 main.go:141] libmachine: (default-k8s-diff-port-254367) Calling .Close
	I0229 02:18:22.789219  360217 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:18:22.789242  360217 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:18:23.407247  360217 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (3.047637799s)
	I0229 02:18:23.407257  360217 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.014711886s)
	I0229 02:18:23.407374  360217 api_server.go:72] duration metric: took 4.306954781s to wait for apiserver process to appear ...
	I0229 02:18:23.407399  360217 api_server.go:88] waiting for apiserver healthz status ...
	I0229 02:18:23.407433  360217 api_server.go:253] Checking apiserver healthz at https://192.168.72.88:8444/healthz ...
	I0229 02:18:23.407314  360217 main.go:141] libmachine: Making call to close driver server
	I0229 02:18:23.407545  360217 main.go:141] libmachine: (default-k8s-diff-port-254367) Calling .Close
	I0229 02:18:23.407931  360217 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:18:23.407948  360217 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:18:23.407959  360217 main.go:141] libmachine: Making call to close driver server
	I0229 02:18:23.407967  360217 main.go:141] libmachine: (default-k8s-diff-port-254367) Calling .Close
	I0229 02:18:23.408309  360217 main.go:141] libmachine: (default-k8s-diff-port-254367) DBG | Closing plugin on server side
	I0229 02:18:23.408318  360217 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:18:23.408331  360217 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:18:23.411220  360217 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-254367 addons enable metrics-server
	
	I0229 02:18:23.412663  360217 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass, dashboard
	I0229 02:18:23.414033  360217 addons.go:505] enable addons completed in 4.386557527s: enabled=[storage-provisioner metrics-server default-storageclass dashboard]
	I0229 02:18:23.439279  360217 api_server.go:279] https://192.168.72.88:8444/healthz returned 200:
	ok
	I0229 02:18:23.443380  360217 api_server.go:141] control plane version: v1.28.4
	I0229 02:18:23.443419  360217 api_server.go:131] duration metric: took 36.010336ms to wait for apiserver health ...
	I0229 02:18:23.443434  360217 system_pods.go:43] waiting for kube-system pods to appear ...
	I0229 02:18:23.459207  360217 system_pods.go:59] 9 kube-system pods found
	I0229 02:18:23.459239  360217 system_pods.go:61] "coredns-5dd5756b68-vsxcv" [f2cabd39-df55-4e81-85d3-a745eb5533c6] Running
	I0229 02:18:23.459246  360217 system_pods.go:61] "coredns-5dd5756b68-x6qjk" [3a4370e5-86c3-4c8b-b275-70e55da74256] Running
	I0229 02:18:23.459253  360217 system_pods.go:61] "etcd-default-k8s-diff-port-254367" [5f2c758b-5068-4138-b2c1-b4161802f59f] Running
	I0229 02:18:23.459259  360217 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-254367" [bfd63194-f697-48ec-a594-9fb43acd5c1c] Running
	I0229 02:18:23.459265  360217 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-254367" [817f802d-a424-425d-89ae-8cab6c34c18d] Running
	I0229 02:18:23.459271  360217 system_pods.go:61] "kube-proxy-dlgmz" [0d9e6b25-c506-43a6-b1d2-e3906fcf7b92] Running
	I0229 02:18:23.459277  360217 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-254367" [fd8b2ce6-a716-4aa4-b09d-c83b4c9c3b90] Running
	I0229 02:18:23.459288  360217 system_pods.go:61] "metrics-server-57f55c9bc5-2wc8d" [da2ffb04-58a1-476a-8ea2-5e8d33512c7d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0229 02:18:23.459296  360217 system_pods.go:61] "storage-provisioner" [0e031ad8-0a53-4aa3-9a00-e03078b0db2f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0229 02:18:23.459314  360217 system_pods.go:74] duration metric: took 15.86958ms to wait for pod list to return data ...
	I0229 02:18:23.459329  360217 default_sa.go:34] waiting for default service account to be created ...
	I0229 02:18:23.464125  360217 default_sa.go:45] found service account: "default"
	I0229 02:18:23.464196  360217 default_sa.go:55] duration metric: took 4.855817ms for default service account to be created ...
	I0229 02:18:23.464222  360217 system_pods.go:116] waiting for k8s-apps to be running ...
	I0229 02:18:23.471833  360217 system_pods.go:86] 9 kube-system pods found
	I0229 02:18:23.471861  360217 system_pods.go:89] "coredns-5dd5756b68-vsxcv" [f2cabd39-df55-4e81-85d3-a745eb5533c6] Running
	I0229 02:18:23.471869  360217 system_pods.go:89] "coredns-5dd5756b68-x6qjk" [3a4370e5-86c3-4c8b-b275-70e55da74256] Running
	I0229 02:18:23.471876  360217 system_pods.go:89] "etcd-default-k8s-diff-port-254367" [5f2c758b-5068-4138-b2c1-b4161802f59f] Running
	I0229 02:18:23.471883  360217 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-254367" [bfd63194-f697-48ec-a594-9fb43acd5c1c] Running
	I0229 02:18:23.471889  360217 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-254367" [817f802d-a424-425d-89ae-8cab6c34c18d] Running
	I0229 02:18:23.471896  360217 system_pods.go:89] "kube-proxy-dlgmz" [0d9e6b25-c506-43a6-b1d2-e3906fcf7b92] Running
	I0229 02:18:23.471908  360217 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-254367" [fd8b2ce6-a716-4aa4-b09d-c83b4c9c3b90] Running
	I0229 02:18:23.471917  360217 system_pods.go:89] "metrics-server-57f55c9bc5-2wc8d" [da2ffb04-58a1-476a-8ea2-5e8d33512c7d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0229 02:18:23.471927  360217 system_pods.go:89] "storage-provisioner" [0e031ad8-0a53-4aa3-9a00-e03078b0db2f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0229 02:18:23.471943  360217 system_pods.go:126] duration metric: took 7.704603ms to wait for k8s-apps to be running ...
	I0229 02:18:23.471955  360217 system_svc.go:44] waiting for kubelet service to be running ....
	I0229 02:18:23.472051  360217 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 02:18:23.495777  360217 system_svc.go:56] duration metric: took 23.811126ms WaitForService to wait for kubelet.
	I0229 02:18:23.495810  360217 kubeadm.go:581] duration metric: took 4.395396941s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0229 02:18:23.495838  360217 node_conditions.go:102] verifying NodePressure condition ...
	I0229 02:18:23.502935  360217 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0229 02:18:23.502962  360217 node_conditions.go:123] node cpu capacity is 2
	I0229 02:18:23.502975  360217 node_conditions.go:105] duration metric: took 7.130297ms to run NodePressure ...
	I0229 02:18:23.502991  360217 start.go:228] waiting for startup goroutines ...
	I0229 02:18:23.503004  360217 start.go:233] waiting for cluster config update ...
	I0229 02:18:23.503019  360217 start.go:242] writing updated cluster config ...
	I0229 02:18:23.503329  360217 ssh_runner.go:195] Run: rm -f paused
	I0229 02:18:23.565856  360217 start.go:601] kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)
	I0229 02:18:23.567626  360217 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-254367" cluster and "default" namespace by default
	I0229 02:18:21.812768  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:18:24.310049  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:18:24.648787  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:18:24.663511  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:18:24.663574  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:18:24.702299  360776 cri.go:89] found id: ""
	I0229 02:18:24.702329  360776 logs.go:276] 0 containers: []
	W0229 02:18:24.702342  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:18:24.702349  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:18:24.702414  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:18:24.741664  360776 cri.go:89] found id: ""
	I0229 02:18:24.741696  360776 logs.go:276] 0 containers: []
	W0229 02:18:24.741708  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:18:24.741720  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:18:24.741782  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:18:24.809755  360776 cri.go:89] found id: ""
	I0229 02:18:24.809788  360776 logs.go:276] 0 containers: []
	W0229 02:18:24.809799  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:18:24.809807  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:18:24.809867  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:18:24.850308  360776 cri.go:89] found id: ""
	I0229 02:18:24.850335  360776 logs.go:276] 0 containers: []
	W0229 02:18:24.850344  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:18:24.850351  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:18:24.850408  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:18:24.903507  360776 cri.go:89] found id: ""
	I0229 02:18:24.903539  360776 logs.go:276] 0 containers: []
	W0229 02:18:24.903551  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:18:24.903559  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:18:24.903624  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:18:24.952996  360776 cri.go:89] found id: ""
	I0229 02:18:24.953026  360776 logs.go:276] 0 containers: []
	W0229 02:18:24.953039  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:18:24.953048  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:18:24.953119  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:18:24.999301  360776 cri.go:89] found id: ""
	I0229 02:18:24.999334  360776 logs.go:276] 0 containers: []
	W0229 02:18:24.999347  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:18:24.999355  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:18:24.999418  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:18:25.044310  360776 cri.go:89] found id: ""
	I0229 02:18:25.044350  360776 logs.go:276] 0 containers: []
	W0229 02:18:25.044362  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:18:25.044375  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:18:25.044391  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:18:25.091374  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:18:25.091407  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:18:25.109080  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:18:25.109118  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:18:25.186611  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:18:25.186639  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:18:25.186663  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:18:25.226779  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:18:25.226825  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:18:26.320759  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:18:28.807091  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:18:27.775896  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:18:27.789596  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:18:27.789662  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:18:27.834159  360776 cri.go:89] found id: ""
	I0229 02:18:27.834186  360776 logs.go:276] 0 containers: []
	W0229 02:18:27.834198  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:18:27.834207  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:18:27.834278  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:18:27.887355  360776 cri.go:89] found id: ""
	I0229 02:18:27.887386  360776 logs.go:276] 0 containers: []
	W0229 02:18:27.887398  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:18:27.887407  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:18:27.887481  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:18:27.927671  360776 cri.go:89] found id: ""
	I0229 02:18:27.927710  360776 logs.go:276] 0 containers: []
	W0229 02:18:27.927724  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:18:27.927740  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:18:27.927819  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:18:27.983438  360776 cri.go:89] found id: ""
	I0229 02:18:27.983471  360776 logs.go:276] 0 containers: []
	W0229 02:18:27.983484  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:18:27.983493  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:18:27.983562  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:18:28.026112  360776 cri.go:89] found id: ""
	I0229 02:18:28.026143  360776 logs.go:276] 0 containers: []
	W0229 02:18:28.026156  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:18:28.026238  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:18:28.026310  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:18:28.069085  360776 cri.go:89] found id: ""
	I0229 02:18:28.069118  360776 logs.go:276] 0 containers: []
	W0229 02:18:28.069130  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:18:28.069138  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:18:28.069285  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:18:28.115010  360776 cri.go:89] found id: ""
	I0229 02:18:28.115037  360776 logs.go:276] 0 containers: []
	W0229 02:18:28.115046  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:18:28.115051  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:18:28.115113  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:18:28.157726  360776 cri.go:89] found id: ""
	I0229 02:18:28.157756  360776 logs.go:276] 0 containers: []
	W0229 02:18:28.157769  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:18:28.157783  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:18:28.157800  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:18:28.218148  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:18:28.218196  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:18:28.238106  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:18:28.238142  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:18:28.328947  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:18:28.328971  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:18:28.328988  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:18:28.364795  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:18:28.364831  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:18:30.914422  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:18:30.929248  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:18:30.929334  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:18:30.983535  360776 cri.go:89] found id: ""
	I0229 02:18:30.983566  360776 logs.go:276] 0 containers: []
	W0229 02:18:30.983577  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:18:30.983585  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:18:30.983644  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:18:31.037809  360776 cri.go:89] found id: ""
	I0229 02:18:31.037842  360776 logs.go:276] 0 containers: []
	W0229 02:18:31.037853  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:18:31.037862  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:18:31.037933  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:18:31.089101  360776 cri.go:89] found id: ""
	I0229 02:18:31.089134  360776 logs.go:276] 0 containers: []
	W0229 02:18:31.089146  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:18:31.089154  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:18:31.089219  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:18:31.139413  360776 cri.go:89] found id: ""
	I0229 02:18:31.139444  360776 logs.go:276] 0 containers: []
	W0229 02:18:31.139456  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:18:31.139463  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:18:31.139542  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:18:31.177185  360776 cri.go:89] found id: ""
	I0229 02:18:31.177214  360776 logs.go:276] 0 containers: []
	W0229 02:18:31.177223  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:18:31.177229  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:18:31.177295  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:18:31.221339  360776 cri.go:89] found id: ""
	I0229 02:18:31.221374  360776 logs.go:276] 0 containers: []
	W0229 02:18:31.221387  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:18:31.221395  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:18:31.221461  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:18:31.261770  360776 cri.go:89] found id: ""
	I0229 02:18:31.261803  360776 logs.go:276] 0 containers: []
	W0229 02:18:31.261815  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:18:31.261824  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:18:31.261895  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:18:31.309126  360776 cri.go:89] found id: ""
	I0229 02:18:31.309157  360776 logs.go:276] 0 containers: []
	W0229 02:18:31.309168  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:18:31.309179  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:18:31.309193  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:18:31.362509  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:18:31.362552  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:18:31.379334  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:18:31.379383  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:18:31.471339  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:18:31.471359  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:18:31.471372  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:18:31.511126  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:18:31.511172  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:18:30.808454  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:18:33.308106  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:18:34.063372  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:18:34.077222  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:18:34.077297  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:18:34.116752  360776 cri.go:89] found id: ""
	I0229 02:18:34.116793  360776 logs.go:276] 0 containers: []
	W0229 02:18:34.116806  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:18:34.116815  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:18:34.116880  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:18:34.157658  360776 cri.go:89] found id: ""
	I0229 02:18:34.157689  360776 logs.go:276] 0 containers: []
	W0229 02:18:34.157700  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:18:34.157708  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:18:34.157779  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:18:34.199922  360776 cri.go:89] found id: ""
	I0229 02:18:34.199957  360776 logs.go:276] 0 containers: []
	W0229 02:18:34.199969  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:18:34.199977  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:18:34.200044  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:18:34.242474  360776 cri.go:89] found id: ""
	I0229 02:18:34.242505  360776 logs.go:276] 0 containers: []
	W0229 02:18:34.242517  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:18:34.242526  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:18:34.242585  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:18:34.289308  360776 cri.go:89] found id: ""
	I0229 02:18:34.289338  360776 logs.go:276] 0 containers: []
	W0229 02:18:34.289360  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:18:34.289367  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:18:34.289431  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:18:34.335947  360776 cri.go:89] found id: ""
	I0229 02:18:34.335985  360776 logs.go:276] 0 containers: []
	W0229 02:18:34.335997  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:18:34.336005  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:18:34.336073  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:18:34.377048  360776 cri.go:89] found id: ""
	I0229 02:18:34.377085  360776 logs.go:276] 0 containers: []
	W0229 02:18:34.377097  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:18:34.377107  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:18:34.377181  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:18:34.424208  360776 cri.go:89] found id: ""
	I0229 02:18:34.424238  360776 logs.go:276] 0 containers: []
	W0229 02:18:34.424250  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:18:34.424270  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:18:34.424288  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:18:34.500223  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:18:34.500245  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:18:34.500263  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:18:34.534652  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:18:34.534688  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:18:34.593369  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:18:34.593405  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:18:34.646940  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:18:34.646982  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:18:37.169523  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:18:37.184168  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:18:37.184245  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:18:37.232979  360776 cri.go:89] found id: ""
	I0229 02:18:37.233015  360776 logs.go:276] 0 containers: []
	W0229 02:18:37.233026  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:18:37.233037  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:18:37.233110  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:18:37.275771  360776 cri.go:89] found id: ""
	I0229 02:18:37.275796  360776 logs.go:276] 0 containers: []
	W0229 02:18:37.275805  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:18:37.275811  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:18:37.275877  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:18:37.322421  360776 cri.go:89] found id: ""
	I0229 02:18:37.322451  360776 logs.go:276] 0 containers: []
	W0229 02:18:37.322460  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:18:37.322466  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:18:37.322525  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:18:35.807858  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:18:38.307264  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:18:37.366974  360776 cri.go:89] found id: ""
	I0229 02:18:37.367001  360776 logs.go:276] 0 containers: []
	W0229 02:18:37.367011  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:18:37.367020  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:18:37.367080  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:18:37.408780  360776 cri.go:89] found id: ""
	I0229 02:18:37.408811  360776 logs.go:276] 0 containers: []
	W0229 02:18:37.408822  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:18:37.408828  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:18:37.408880  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:18:37.447402  360776 cri.go:89] found id: ""
	I0229 02:18:37.447429  360776 logs.go:276] 0 containers: []
	W0229 02:18:37.447441  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:18:37.447449  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:18:37.447511  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:18:37.486454  360776 cri.go:89] found id: ""
	I0229 02:18:37.486491  360776 logs.go:276] 0 containers: []
	W0229 02:18:37.486502  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:18:37.486510  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:18:37.486579  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:18:37.531484  360776 cri.go:89] found id: ""
	I0229 02:18:37.531517  360776 logs.go:276] 0 containers: []
	W0229 02:18:37.531533  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:18:37.531545  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:18:37.531562  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:18:37.581274  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:18:37.581312  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:18:37.601745  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:18:37.601777  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:18:37.707773  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:18:37.707801  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:18:37.707818  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:18:37.740658  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:18:37.740698  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:18:40.296427  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:18:40.311365  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:18:40.311439  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:18:40.354647  360776 cri.go:89] found id: ""
	I0229 02:18:40.354675  360776 logs.go:276] 0 containers: []
	W0229 02:18:40.354693  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:18:40.354701  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:18:40.354769  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:18:40.400490  360776 cri.go:89] found id: ""
	I0229 02:18:40.400520  360776 logs.go:276] 0 containers: []
	W0229 02:18:40.400529  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:18:40.400535  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:18:40.400602  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:18:40.442029  360776 cri.go:89] found id: ""
	I0229 02:18:40.442051  360776 logs.go:276] 0 containers: []
	W0229 02:18:40.442060  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:18:40.442065  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:18:40.442169  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:18:40.481183  360776 cri.go:89] found id: ""
	I0229 02:18:40.481216  360776 logs.go:276] 0 containers: []
	W0229 02:18:40.481228  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:18:40.481237  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:18:40.481316  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:18:40.523076  360776 cri.go:89] found id: ""
	I0229 02:18:40.523104  360776 logs.go:276] 0 containers: []
	W0229 02:18:40.523113  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:18:40.523118  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:18:40.523209  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:18:40.561787  360776 cri.go:89] found id: ""
	I0229 02:18:40.561817  360776 logs.go:276] 0 containers: []
	W0229 02:18:40.561826  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:18:40.561832  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:18:40.561908  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:18:40.598621  360776 cri.go:89] found id: ""
	I0229 02:18:40.598647  360776 logs.go:276] 0 containers: []
	W0229 02:18:40.598655  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:18:40.598662  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:18:40.598710  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:18:40.637701  360776 cri.go:89] found id: ""
	I0229 02:18:40.637734  360776 logs.go:276] 0 containers: []
	W0229 02:18:40.637745  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:18:40.637758  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:18:40.637775  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:18:40.685317  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:18:40.685351  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:18:40.735348  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:18:40.735386  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:18:40.751373  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:18:40.751434  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:18:40.822604  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:18:40.822624  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:18:40.822637  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:18:40.311266  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:18:42.806740  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:18:44.809136  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:18:43.357769  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:18:43.373119  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:18:43.373186  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:18:43.409160  360776 cri.go:89] found id: ""
	I0229 02:18:43.409181  360776 logs.go:276] 0 containers: []
	W0229 02:18:43.409189  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:18:43.409195  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:18:43.409238  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:18:43.447193  360776 cri.go:89] found id: ""
	I0229 02:18:43.447222  360776 logs.go:276] 0 containers: []
	W0229 02:18:43.447231  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:18:43.447237  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:18:43.447296  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:18:43.487906  360776 cri.go:89] found id: ""
	I0229 02:18:43.487934  360776 logs.go:276] 0 containers: []
	W0229 02:18:43.487942  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:18:43.487949  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:18:43.488008  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:18:43.527968  360776 cri.go:89] found id: ""
	I0229 02:18:43.528002  360776 logs.go:276] 0 containers: []
	W0229 02:18:43.528016  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:18:43.528024  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:18:43.528100  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:18:43.573298  360776 cri.go:89] found id: ""
	I0229 02:18:43.573333  360776 logs.go:276] 0 containers: []
	W0229 02:18:43.573344  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:18:43.573351  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:18:43.573461  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:18:43.630816  360776 cri.go:89] found id: ""
	I0229 02:18:43.630856  360776 logs.go:276] 0 containers: []
	W0229 02:18:43.630867  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:18:43.630881  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:18:43.630954  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:18:43.701516  360776 cri.go:89] found id: ""
	I0229 02:18:43.701547  360776 logs.go:276] 0 containers: []
	W0229 02:18:43.701559  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:18:43.701567  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:18:43.701636  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:18:43.747444  360776 cri.go:89] found id: ""
	I0229 02:18:43.747474  360776 logs.go:276] 0 containers: []
	W0229 02:18:43.747484  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:18:43.747494  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:18:43.747510  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:18:43.828216  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:18:43.828246  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:18:43.828270  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:18:43.874647  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:18:43.874684  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:18:43.937776  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:18:43.937808  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:18:43.989210  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:18:43.989250  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
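
The log-gathering cycle above (kubelet, dmesg, describe nodes, containerd, container status) follows one pattern: run each collector through bash and keep going even when one fails, as "describe nodes" does while the apiserver is down. A minimal local sketch of that pattern in Go, with an illustrative command set rather than minikube's exact collectors:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// Illustrative collectors; the real set also includes
    	// "describe nodes" and per-container crictl queries.
    	cmds := map[string]string{
    		"kubelet":    "journalctl -u kubelet -n 400",
    		"containerd": "journalctl -u containerd -n 400",
    		"dmesg":      "dmesg --level warn,err,crit,alert,emerg | tail -n 400",
    	}
    	for name, cmd := range cmds {
    		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
    		if err != nil {
    			// A failing collector is reported and skipped, never fatal,
    			// which is why the loop above continues after
    			// "failed describe nodes".
    			fmt.Printf("%s failed: %v\n", name, err)
    			continue
    		}
    		fmt.Printf("=== %s ===\n%s", name, out)
    	}
    }
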
	I0229 02:18:46.506056  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:18:46.519717  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:18:46.519784  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:18:46.585095  360776 cri.go:89] found id: ""
	I0229 02:18:46.585128  360776 logs.go:276] 0 containers: []
	W0229 02:18:46.585141  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:18:46.585149  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:18:46.585212  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:18:46.638520  360776 cri.go:89] found id: ""
	I0229 02:18:46.638553  360776 logs.go:276] 0 containers: []
	W0229 02:18:46.638565  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:18:46.638572  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:18:46.638637  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:18:46.691413  360776 cri.go:89] found id: ""
	I0229 02:18:46.691446  360776 logs.go:276] 0 containers: []
	W0229 02:18:46.691458  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:18:46.691466  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:18:46.691532  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:18:46.735054  360776 cri.go:89] found id: ""
	I0229 02:18:46.735083  360776 logs.go:276] 0 containers: []
	W0229 02:18:46.735092  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:18:46.735098  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:18:46.735159  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:18:46.772486  360776 cri.go:89] found id: ""
	I0229 02:18:46.772531  360776 logs.go:276] 0 containers: []
	W0229 02:18:46.772543  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:18:46.772551  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:18:46.772610  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:18:46.815466  360776 cri.go:89] found id: ""
	I0229 02:18:46.815491  360776 logs.go:276] 0 containers: []
	W0229 02:18:46.815499  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:18:46.815505  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:18:46.815553  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:18:46.853168  360776 cri.go:89] found id: ""
	I0229 02:18:46.853199  360776 logs.go:276] 0 containers: []
	W0229 02:18:46.853212  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:18:46.853220  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:18:46.853299  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:18:46.894320  360776 cri.go:89] found id: ""
	I0229 02:18:46.894353  360776 logs.go:276] 0 containers: []
	W0229 02:18:46.894365  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:18:46.894378  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:18:46.894394  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:18:46.944593  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:18:46.944631  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:18:46.960405  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:18:46.960433  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:18:47.029929  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:18:47.029960  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:18:47.029977  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:18:47.065292  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:18:47.065327  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:18:47.308699  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:18:49.808633  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:18:49.620521  360776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:18:49.636247  360776 kubeadm.go:640] restartCluster took 4m12.880265518s
	W0229 02:18:49.636335  360776 out.go:239] ! Unable to restart cluster, will reset it: apiserver healthz: apiserver process never appeared
	I0229 02:18:49.636372  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0229 02:18:50.114412  360776 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 02:18:50.130257  360776 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0229 02:18:50.141556  360776 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 02:18:50.152882  360776 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
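
The status-2 result above is how the stale-config check concludes there is nothing to clean: all four kubeconfig paths are missing, so cleanup is skipped and kubeadm init proceeds. A sketch of the same decision, assuming a plain stat per file stands in for the `ls -la` probe:

    package main

    import (
    	"fmt"
    	"os"
    )

    func main() {
    	files := []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	}
    	for _, f := range files {
    		if _, err := os.Stat(f); err != nil {
    			// Any missing file means there is no stale config to clean,
    			// so the cleanup step is skipped, as in the log above.
    			fmt.Printf("skipping stale config cleanup: %v\n", err)
    			return
    		}
    	}
    	fmt.Println("all four configs present; stale config cleanup would run")
    }
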
	I0229 02:18:50.152929  360776 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0229 02:18:50.213815  360776 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0229 02:18:50.213922  360776 kubeadm.go:322] [preflight] Running pre-flight checks
	I0229 02:18:50.341927  360776 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0229 02:18:50.342103  360776 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0229 02:18:50.342249  360776 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0229 02:18:50.577201  360776 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0229 02:18:50.578563  360776 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0229 02:18:50.587158  360776 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0229 02:18:50.712207  360776 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0229 02:18:50.714032  360776 out.go:204]   - Generating certificates and keys ...
	I0229 02:18:50.714149  360776 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0229 02:18:50.716103  360776 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0229 02:18:50.717503  360776 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0229 02:18:50.718203  360776 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0229 02:18:50.719194  360776 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0229 02:18:50.719913  360776 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0229 02:18:50.721364  360776 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0229 02:18:50.722412  360776 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0229 02:18:50.723087  360776 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0229 02:18:50.723663  360776 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0229 02:18:50.723813  360776 kubeadm.go:322] [certs] Using the existing "sa" key
	I0229 02:18:50.724029  360776 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0229 02:18:51.003432  360776 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0229 02:18:51.145978  360776 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0229 02:18:51.230808  360776 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0229 02:18:51.340889  360776 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0229 02:18:51.341726  360776 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0229 02:18:51.343443  360776 out.go:204]   - Booting up control plane ...
	I0229 02:18:51.343564  360776 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0229 02:18:51.347723  360776 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0229 02:18:51.348592  360776 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0229 02:18:51.349514  360776 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0229 02:18:51.352720  360776 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0229 02:18:52.307313  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:18:54.806310  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:18:56.806412  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:18:58.806973  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:19:01.306043  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:19:03.308131  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:19:05.308210  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:19:07.807594  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:19:09.812481  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:19:12.308103  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:19:14.310513  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:19:16.806841  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:19:18.807740  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:19:21.306666  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:19:23.307064  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:19:25.806451  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:19:27.806822  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:19:29.807253  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:19:31.352923  360776 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0229 02:19:31.353370  360776 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 02:19:31.353570  360776 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 02:19:32.307377  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:19:34.309850  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:19:36.354842  360776 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 02:19:36.355179  360776 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 02:19:36.806074  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:19:38.807249  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:19:41.306690  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:19:43.308582  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:19:46.356431  360776 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 02:19:46.356735  360776 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
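
The repeating [kubelet-check] failures above come from kubeadm probing the kubelet's local healthz endpoint. A minimal sketch of that probe; the 40s deadline and 5s retry interval are assumptions matching the cadence visible in the log, not kubeadm's exact constants:

    package main

    import (
    	"fmt"
    	"net/http"
    	"time"
    )

    func main() {
    	deadline := time.Now().Add(40 * time.Second)
    	for time.Now().Before(deadline) {
    		resp, err := http.Get("http://localhost:10248/healthz")
    		if err == nil {
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				fmt.Println("kubelet is healthy")
    				return
    			}
    		}
    		time.Sleep(5 * time.Second)
    	}
    	// Matches the connection-refused errors above: the kubelet
    	// never answered on :10248.
    	fmt.Println("kubelet never became healthy")
    }
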
	I0229 02:19:45.309102  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:19:47.808426  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:19:50.306270  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:19:52.307628  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:19:54.806254  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:19:56.800277  361093 pod_ready.go:81] duration metric: took 4m0.000614636s waiting for pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace to be "Ready" ...
	E0229 02:19:56.800308  361093 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-9sdkl" in "kube-system" namespace to be "Ready" (will not retry!)
	I0229 02:19:56.800332  361093 pod_ready.go:38] duration metric: took 4m14.556158159s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 02:19:56.800367  361093 kubeadm.go:640] restartCluster took 4m32.656788973s
	W0229 02:19:56.800444  361093 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
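
The 4m0s readiness timeout above is the outcome of a poll loop like the pod_ready entries scattered through this log: check the pod's Ready condition every couple of seconds until the deadline. A sketch under those assumptions, using kubectl's jsonpath output as a stand-in for the API lookup minikube actually performs:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    	"time"
    )

    // podReady is a stand-in check: minikube reads the PodReady condition
    // via the Kubernetes API; kubectl's jsonpath output approximates it.
    func podReady(ns, name string) bool {
    	out, err := exec.Command("kubectl", "-n", ns, "get", "pod", name,
    		"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
    	return err == nil && strings.TrimSpace(string(out)) == "True"
    }

    func main() {
    	deadline := time.Now().Add(4 * time.Minute)
    	for time.Now().Before(deadline) {
    		if podReady("kube-system", "metrics-server-57f55c9bc5-9sdkl") {
    			fmt.Println("pod is Ready")
    			return
    		}
    		time.Sleep(2 * time.Second)
    	}
    	fmt.Println(`timed out waiting 4m0s for pod to be "Ready"`)
    }
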
	I0229 02:19:56.800489  361093 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0229 02:20:01.980143  361093 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (5.179624969s)
	I0229 02:20:01.980234  361093 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 02:20:01.996633  361093 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0229 02:20:02.007422  361093 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 02:20:02.017783  361093 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 02:20:02.017835  361093 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0229 02:20:02.234279  361093 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0229 02:20:06.357825  360776 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 02:20:06.358110  360776 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 02:20:10.891699  361093 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0229 02:20:10.891827  361093 kubeadm.go:322] [preflight] Running pre-flight checks
	I0229 02:20:10.891929  361093 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0229 02:20:10.892046  361093 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0229 02:20:10.892166  361093 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0229 02:20:10.892275  361093 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0229 02:20:10.893594  361093 out.go:204]   - Generating certificates and keys ...
	I0229 02:20:10.893681  361093 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0229 02:20:10.893781  361093 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0229 02:20:10.893878  361093 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0229 02:20:10.893977  361093 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0229 02:20:10.894061  361093 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0229 02:20:10.894150  361093 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0229 02:20:10.894255  361093 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0229 02:20:10.894353  361093 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0229 02:20:10.894466  361093 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0229 02:20:10.894563  361093 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0229 02:20:10.894619  361093 kubeadm.go:322] [certs] Using the existing "sa" key
	I0229 02:20:10.894689  361093 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0229 02:20:10.894754  361093 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0229 02:20:10.894831  361093 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0229 02:20:10.894919  361093 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0229 02:20:10.895000  361093 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0229 02:20:10.895120  361093 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0229 02:20:10.895214  361093 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0229 02:20:10.897074  361093 out.go:204]   - Booting up control plane ...
	I0229 02:20:10.897177  361093 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0229 02:20:10.897301  361093 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0229 02:20:10.897401  361093 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0229 02:20:10.897546  361093 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0229 02:20:10.897655  361093 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0229 02:20:10.897730  361093 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0229 02:20:10.897955  361093 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0229 02:20:10.898072  361093 kubeadm.go:322] [apiclient] All control plane components are healthy after 6.003481 seconds
	I0229 02:20:10.898235  361093 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0229 02:20:10.898362  361093 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0229 02:20:10.898450  361093 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0229 02:20:10.898685  361093 kubeadm.go:322] [mark-control-plane] Marking the node embed-certs-665766 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0229 02:20:10.898770  361093 kubeadm.go:322] [bootstrap-token] Using token: 269xha.46kssuu5kaip43vm
	I0229 02:20:10.899874  361093 out.go:204]   - Configuring RBAC rules ...
	I0229 02:20:10.899970  361093 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0229 02:20:10.900078  361093 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0229 02:20:10.900198  361093 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0229 02:20:10.900334  361093 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0229 02:20:10.900513  361093 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0229 02:20:10.900636  361093 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0229 02:20:10.900771  361093 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0229 02:20:10.900814  361093 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0229 02:20:10.900864  361093 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0229 02:20:10.900874  361093 kubeadm.go:322] 
	I0229 02:20:10.900929  361093 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0229 02:20:10.900935  361093 kubeadm.go:322] 
	I0229 02:20:10.901047  361093 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0229 02:20:10.901067  361093 kubeadm.go:322] 
	I0229 02:20:10.901106  361093 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0229 02:20:10.901184  361093 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0229 02:20:10.901249  361093 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0229 02:20:10.901259  361093 kubeadm.go:322] 
	I0229 02:20:10.901323  361093 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0229 02:20:10.901335  361093 kubeadm.go:322] 
	I0229 02:20:10.901410  361093 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0229 02:20:10.901421  361093 kubeadm.go:322] 
	I0229 02:20:10.901485  361093 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0229 02:20:10.901585  361093 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0229 02:20:10.901691  361093 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0229 02:20:10.901702  361093 kubeadm.go:322] 
	I0229 02:20:10.901773  361093 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0229 02:20:10.901869  361093 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0229 02:20:10.901881  361093 kubeadm.go:322] 
	I0229 02:20:10.901991  361093 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 269xha.46kssuu5kaip43vm \
	I0229 02:20:10.902122  361093 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:99ebea2fb56f25d35c82503561681d8a6f4727046bb097effad0d0203de43e75 \
	I0229 02:20:10.902144  361093 kubeadm.go:322] 	--control-plane 
	I0229 02:20:10.902149  361093 kubeadm.go:322] 
	I0229 02:20:10.902254  361093 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0229 02:20:10.902273  361093 kubeadm.go:322] 
	I0229 02:20:10.902377  361093 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 269xha.46kssuu5kaip43vm \
	I0229 02:20:10.902520  361093 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:99ebea2fb56f25d35c82503561681d8a6f4727046bb097effad0d0203de43e75 
	I0229 02:20:10.902534  361093 cni.go:84] Creating CNI manager for ""
	I0229 02:20:10.902541  361093 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0229 02:20:10.904582  361093 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0229 02:20:10.905676  361093 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0229 02:20:10.930137  361093 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
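
The 457-byte payload copied to /etc/cni/net.d/1-k8s.conflist above is a bridge CNI configuration. The sketch below prints a representative bridge conflist for illustration; the subnet, bridge name, and plugin list are assumptions, not minikube's exact template:

    package main

    import "fmt"

    // conflist is a representative bridge CNI configuration; the subnet,
    // bridge name, and plugin list are assumptions, not the exact 457
    // bytes minikube writes.
    const conflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isGateway": true,
          "ipMasq": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.244.0.0/16"
          }
        },
        {
          "type": "portmap",
          "capabilities": { "portMappings": true }
        }
      ]
    }`

    func main() {
    	fmt.Println(conflist)
    }
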
	I0229 02:20:10.979891  361093 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0229 02:20:10.980027  361093 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=f83faa3abac33ca85ff15afa19006ad0a2554d61 minikube.k8s.io/name=embed-certs-665766 minikube.k8s.io/updated_at=2024_02_29T02_20_10_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:20:10.980030  361093 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:20:11.079204  361093 ops.go:34] apiserver oom_adj: -16
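
The "oom_adj: -16" reading above means the kernel is told to strongly avoid OOM-killing the apiserver. A sketch of the probe behind `cat /proc/$(pgrep kube-apiserver)/oom_adj`, reusing the pgrep pattern from the log:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// Same pattern as the log: newest process whose full command
    	// line matches kube-apiserver.*minikube.*
    	pid, err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
    	if err != nil {
    		fmt.Println("kube-apiserver not running:", err)
    		return
    	}
    	adj, err := os.ReadFile("/proc/" + strings.TrimSpace(string(pid)) + "/oom_adj")
    	if err != nil {
    		fmt.Println("could not read oom_adj:", err)
    		return
    	}
    	// -16 tells the kernel to strongly avoid OOM-killing this process.
    	fmt.Printf("apiserver oom_adj: %s", adj)
    }
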
	I0229 02:20:11.314252  361093 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:20:11.814676  361093 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:20:12.315103  361093 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:20:12.814906  361093 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:20:13.314822  361093 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:20:13.814328  361093 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:20:14.314397  361093 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:20:14.814464  361093 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:20:15.315077  361093 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:20:15.814758  361093 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:20:16.314975  361093 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:20:16.815307  361093 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:20:17.315305  361093 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:20:17.814371  361093 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:20:18.315148  361093 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:20:18.814336  361093 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:20:19.314531  361093 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:20:19.814983  361093 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:20:20.314365  361093 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:20:20.815167  361093 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:20:21.314560  361093 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:20:21.814519  361093 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:20:22.315326  361093 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:20:22.814733  361093 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:20:23.315210  361093 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:20:23.460714  361093 kubeadm.go:1088] duration metric: took 12.480754596s to wait for elevateKubeSystemPrivileges.
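
The repeated `kubectl get sa default` runs above, spaced roughly 500ms apart, are a readiness gate: the default service account appearing means kube-system privileges can be elevated. A sketch of that retry loop; the 2-minute deadline is an assumption (the log shows success after about 12.5s):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    func main() {
    	deadline := time.Now().Add(2 * time.Minute)
    	for time.Now().Before(deadline) {
    		err := exec.Command("sudo", "/var/lib/minikube/binaries/v1.28.4/kubectl",
    			"get", "sa", "default",
    			"--kubeconfig=/var/lib/minikube/kubeconfig").Run()
    		if err == nil {
    			fmt.Println("default service account exists; privileges can be elevated")
    			return
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	fmt.Println("timed out waiting for the default service account")
    }
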
	I0229 02:20:23.460760  361093 kubeadm.go:406] StartCluster complete in 4m59.384955855s
	I0229 02:20:23.460835  361093 settings.go:142] acquiring lock: {Name:mkf6d985c87ae1ba2300543c86d438bf48134dc6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:20:23.460963  361093 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18063-309085/kubeconfig
	I0229 02:20:23.462373  361093 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18063-309085/kubeconfig: {Name:mkd85f0f36cbc770f723a754929a01738907b7bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:20:23.462619  361093 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0229 02:20:23.462712  361093 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0229 02:20:23.462806  361093 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-665766"
	I0229 02:20:23.462833  361093 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-665766"
	I0229 02:20:23.462842  361093 addons.go:69] Setting dashboard=true in profile "embed-certs-665766"
	W0229 02:20:23.462848  361093 addons.go:243] addon storage-provisioner should already be in state true
	I0229 02:20:23.462878  361093 addons.go:234] Setting addon dashboard=true in "embed-certs-665766"
	W0229 02:20:23.462887  361093 addons.go:243] addon dashboard should already be in state true
	I0229 02:20:23.462885  361093 config.go:182] Loaded profile config "embed-certs-665766": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0229 02:20:23.462865  361093 addons.go:69] Setting metrics-server=true in profile "embed-certs-665766"
	I0229 02:20:23.462912  361093 addons.go:234] Setting addon metrics-server=true in "embed-certs-665766"
	I0229 02:20:23.462837  361093 addons.go:69] Setting default-storageclass=true in profile "embed-certs-665766"
	I0229 02:20:23.462940  361093 host.go:66] Checking if "embed-certs-665766" exists ...
	W0229 02:20:23.462921  361093 addons.go:243] addon metrics-server should already be in state true
	I0229 02:20:23.462988  361093 host.go:66] Checking if "embed-certs-665766" exists ...
	I0229 02:20:23.462939  361093 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-665766"
	I0229 02:20:23.462940  361093 host.go:66] Checking if "embed-certs-665766" exists ...
	I0229 02:20:23.463367  361093 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0229 02:20:23.463390  361093 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0229 02:20:23.463409  361093 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:20:23.463414  361093 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:20:23.463390  361093 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0229 02:20:23.463448  361093 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:20:23.463573  361093 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0229 02:20:23.463594  361093 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:20:23.484706  361093 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36455
	I0229 02:20:23.484734  361093 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34121
	I0229 02:20:23.484744  361093 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44029
	I0229 02:20:23.484867  361093 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43235
	I0229 02:20:23.485323  361093 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:20:23.485340  361093 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:20:23.485376  361093 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:20:23.485416  361093 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:20:23.485852  361093 main.go:141] libmachine: Using API Version  1
	I0229 02:20:23.485859  361093 main.go:141] libmachine: Using API Version  1
	I0229 02:20:23.485870  361093 main.go:141] libmachine: Using API Version  1
	I0229 02:20:23.485878  361093 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:20:23.485875  361093 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:20:23.485887  361093 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:20:23.486261  361093 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:20:23.486314  361093 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:20:23.486428  361093 main.go:141] libmachine: Using API Version  1
	I0229 02:20:23.486441  361093 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:20:23.486554  361093 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:20:23.486728  361093 main.go:141] libmachine: (embed-certs-665766) Calling .GetState
	I0229 02:20:23.486962  361093 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0229 02:20:23.487011  361093 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:20:23.487123  361093 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0229 02:20:23.487168  361093 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:20:23.487916  361093 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:20:23.488429  361093 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0229 02:20:23.488468  361093 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:20:23.490061  361093 addons.go:234] Setting addon default-storageclass=true in "embed-certs-665766"
	W0229 02:20:23.490105  361093 addons.go:243] addon default-storageclass should already be in state true
	I0229 02:20:23.490135  361093 host.go:66] Checking if "embed-certs-665766" exists ...
	I0229 02:20:23.490519  361093 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0229 02:20:23.490554  361093 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:20:23.505714  361093 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43269
	I0229 02:20:23.506382  361093 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:20:23.506952  361093 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45455
	I0229 02:20:23.507108  361093 main.go:141] libmachine: Using API Version  1
	I0229 02:20:23.507125  361093 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:20:23.507297  361093 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:20:23.507838  361093 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:20:23.508574  361093 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0229 02:20:23.508601  361093 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:20:23.508856  361093 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42483
	I0229 02:20:23.509055  361093 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33449
	I0229 02:20:23.509239  361093 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:20:23.509409  361093 main.go:141] libmachine: Using API Version  1
	I0229 02:20:23.509420  361093 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:20:23.509928  361093 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:20:23.509971  361093 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:20:23.510020  361093 main.go:141] libmachine: Using API Version  1
	I0229 02:20:23.510043  361093 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:20:23.510427  361093 main.go:141] libmachine: Using API Version  1
	I0229 02:20:23.510446  361093 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:20:23.510456  361093 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:20:23.510457  361093 main.go:141] libmachine: (embed-certs-665766) Calling .GetState
	I0229 02:20:23.510836  361093 main.go:141] libmachine: (embed-certs-665766) Calling .GetState
	I0229 02:20:23.510844  361093 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:20:23.511614  361093 main.go:141] libmachine: (embed-certs-665766) Calling .GetState
	I0229 02:20:23.512674  361093 main.go:141] libmachine: (embed-certs-665766) Calling .DriverName
	I0229 02:20:23.512911  361093 main.go:141] libmachine: (embed-certs-665766) Calling .DriverName
	I0229 02:20:23.514837  361093 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0229 02:20:23.516144  361093 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0229 02:20:23.513612  361093 main.go:141] libmachine: (embed-certs-665766) Calling .DriverName
	I0229 02:20:23.518587  361093 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0229 02:20:23.518631  361093 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0229 02:20:23.519750  361093 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0229 02:20:23.520898  361093 addons.go:426] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0229 02:20:23.520912  361093 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0229 02:20:23.520925  361093 main.go:141] libmachine: (embed-certs-665766) Calling .GetSSHHostname
	I0229 02:20:23.519796  361093 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 02:20:23.519826  361093 main.go:141] libmachine: (embed-certs-665766) Calling .GetSSHHostname
	I0229 02:20:23.522245  361093 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0229 02:20:23.522263  361093 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0229 02:20:23.522279  361093 main.go:141] libmachine: (embed-certs-665766) Calling .GetSSHHostname
	I0229 02:20:23.525267  361093 main.go:141] libmachine: (embed-certs-665766) DBG | domain embed-certs-665766 has defined MAC address 52:54:00:0f:ed:e3 in network mk-embed-certs-665766
	I0229 02:20:23.525478  361093 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34297
	I0229 02:20:23.525918  361093 main.go:141] libmachine: (embed-certs-665766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:ed:e3", ip: ""} in network mk-embed-certs-665766: {Iface:virbr4 ExpiryTime:2024-02-29 03:15:12 +0000 UTC Type:0 Mac:52:54:00:0f:ed:e3 Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:embed-certs-665766 Clientid:01:52:54:00:0f:ed:e3}
	I0229 02:20:23.525942  361093 main.go:141] libmachine: (embed-certs-665766) DBG | domain embed-certs-665766 has defined IP address 192.168.39.252 and MAC address 52:54:00:0f:ed:e3 in network mk-embed-certs-665766
	I0229 02:20:23.526065  361093 main.go:141] libmachine: (embed-certs-665766) Calling .GetSSHPort
	I0229 02:20:23.526171  361093 main.go:141] libmachine: (embed-certs-665766) DBG | domain embed-certs-665766 has defined MAC address 52:54:00:0f:ed:e3 in network mk-embed-certs-665766
	I0229 02:20:23.526249  361093 main.go:141] libmachine: (embed-certs-665766) Calling .GetSSHKeyPath
	I0229 02:20:23.526364  361093 main.go:141] libmachine: (embed-certs-665766) Calling .GetSSHUsername
	I0229 02:20:23.526620  361093 sshutil.go:53] new ssh client: &{IP:192.168.39.252 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-309085/.minikube/machines/embed-certs-665766/id_rsa Username:docker}
	I0229 02:20:23.526677  361093 main.go:141] libmachine: (embed-certs-665766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:ed:e3", ip: ""} in network mk-embed-certs-665766: {Iface:virbr4 ExpiryTime:2024-02-29 03:15:12 +0000 UTC Type:0 Mac:52:54:00:0f:ed:e3 Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:embed-certs-665766 Clientid:01:52:54:00:0f:ed:e3}
	I0229 02:20:23.526706  361093 main.go:141] libmachine: (embed-certs-665766) DBG | domain embed-certs-665766 has defined IP address 192.168.39.252 and MAC address 52:54:00:0f:ed:e3 in network mk-embed-certs-665766
	I0229 02:20:23.526865  361093 main.go:141] libmachine: (embed-certs-665766) DBG | domain embed-certs-665766 has defined MAC address 52:54:00:0f:ed:e3 in network mk-embed-certs-665766
	I0229 02:20:23.526876  361093 main.go:141] libmachine: (embed-certs-665766) Calling .GetSSHPort
	I0229 02:20:23.526891  361093 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:20:23.527094  361093 main.go:141] libmachine: (embed-certs-665766) Calling .GetSSHKeyPath
	I0229 02:20:23.527286  361093 main.go:141] libmachine: (embed-certs-665766) Calling .GetSSHUsername
	I0229 02:20:23.527370  361093 main.go:141] libmachine: (embed-certs-665766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:ed:e3", ip: ""} in network mk-embed-certs-665766: {Iface:virbr4 ExpiryTime:2024-02-29 03:15:12 +0000 UTC Type:0 Mac:52:54:00:0f:ed:e3 Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:embed-certs-665766 Clientid:01:52:54:00:0f:ed:e3}
	I0229 02:20:23.527392  361093 main.go:141] libmachine: (embed-certs-665766) DBG | domain embed-certs-665766 has defined IP address 192.168.39.252 and MAC address 52:54:00:0f:ed:e3 in network mk-embed-certs-665766
	I0229 02:20:23.527414  361093 main.go:141] libmachine: Using API Version  1
	I0229 02:20:23.527426  361093 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:20:23.527431  361093 sshutil.go:53] new ssh client: &{IP:192.168.39.252 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-309085/.minikube/machines/embed-certs-665766/id_rsa Username:docker}
	I0229 02:20:23.527440  361093 main.go:141] libmachine: (embed-certs-665766) Calling .GetSSHPort
	I0229 02:20:23.527600  361093 main.go:141] libmachine: (embed-certs-665766) Calling .GetSSHKeyPath
	I0229 02:20:23.527770  361093 main.go:141] libmachine: (embed-certs-665766) Calling .GetSSHUsername
	I0229 02:20:23.527837  361093 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:20:23.527921  361093 sshutil.go:53] new ssh client: &{IP:192.168.39.252 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-309085/.minikube/machines/embed-certs-665766/id_rsa Username:docker}
	I0229 02:20:23.528137  361093 main.go:141] libmachine: (embed-certs-665766) Calling .GetState
	I0229 02:20:23.529551  361093 main.go:141] libmachine: (embed-certs-665766) Calling .DriverName
	I0229 02:20:23.529764  361093 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0229 02:20:23.529779  361093 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0229 02:20:23.529795  361093 main.go:141] libmachine: (embed-certs-665766) Calling .GetSSHHostname
	I0229 02:20:23.532530  361093 main.go:141] libmachine: (embed-certs-665766) DBG | domain embed-certs-665766 has defined MAC address 52:54:00:0f:ed:e3 in network mk-embed-certs-665766
	I0229 02:20:23.532935  361093 main.go:141] libmachine: (embed-certs-665766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:ed:e3", ip: ""} in network mk-embed-certs-665766: {Iface:virbr4 ExpiryTime:2024-02-29 03:15:12 +0000 UTC Type:0 Mac:52:54:00:0f:ed:e3 Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:embed-certs-665766 Clientid:01:52:54:00:0f:ed:e3}
	I0229 02:20:23.532987  361093 main.go:141] libmachine: (embed-certs-665766) DBG | domain embed-certs-665766 has defined IP address 192.168.39.252 and MAC address 52:54:00:0f:ed:e3 in network mk-embed-certs-665766
	I0229 02:20:23.533201  361093 main.go:141] libmachine: (embed-certs-665766) Calling .GetSSHPort
	I0229 02:20:23.533347  361093 main.go:141] libmachine: (embed-certs-665766) Calling .GetSSHKeyPath
	I0229 02:20:23.533475  361093 main.go:141] libmachine: (embed-certs-665766) Calling .GetSSHUsername
	I0229 02:20:23.533597  361093 sshutil.go:53] new ssh client: &{IP:192.168.39.252 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-309085/.minikube/machines/embed-certs-665766/id_rsa Username:docker}
	I0229 02:20:23.717181  361093 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0229 02:20:23.718730  361093 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0229 02:20:23.718746  361093 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0229 02:20:23.751609  361093 addons.go:426] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0229 02:20:23.751628  361093 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0229 02:20:23.774666  361093 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0229 02:20:23.783425  361093 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0229 02:20:23.783444  361093 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0229 02:20:23.799321  361093 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0229 02:20:23.843414  361093 addons.go:426] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0229 02:20:23.843438  361093 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0229 02:20:23.857004  361093 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0229 02:20:23.857027  361093 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0229 02:20:23.930205  361093 addons.go:426] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0229 02:20:23.930233  361093 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0229 02:20:23.943684  361093 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0229 02:20:23.970259  361093 kapi.go:248] "coredns" deployment in "kube-system" namespace and "embed-certs-665766" context rescaled to 1 replicas
	I0229 02:20:23.970298  361093 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.252 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0229 02:20:23.972009  361093 out.go:177] * Verifying Kubernetes components...
	I0229 02:20:23.973240  361093 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 02:20:24.061065  361093 addons.go:426] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0229 02:20:24.061103  361093 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0229 02:20:24.147407  361093 addons.go:426] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0229 02:20:24.147441  361093 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0229 02:20:24.204201  361093 addons.go:426] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0229 02:20:24.204236  361093 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0229 02:20:24.243191  361093 addons.go:426] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0229 02:20:24.243237  361093 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0229 02:20:24.263274  361093 addons.go:426] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0229 02:20:24.263299  361093 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0229 02:20:24.283356  361093 addons.go:426] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0229 02:20:24.283374  361093 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0229 02:20:24.303371  361093 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0229 02:20:25.432821  361093 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.715600333s)
	I0229 02:20:25.432877  361093 main.go:141] libmachine: Making call to close driver server
	I0229 02:20:25.432884  361093 main.go:141] libmachine: (embed-certs-665766) Calling .Close
	I0229 02:20:25.433179  361093 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:20:25.433198  361093 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:20:25.433214  361093 main.go:141] libmachine: Making call to close driver server
	I0229 02:20:25.433223  361093 main.go:141] libmachine: (embed-certs-665766) Calling .Close
	I0229 02:20:25.433233  361093 main.go:141] libmachine: (embed-certs-665766) DBG | Closing plugin on server side
	I0229 02:20:25.433477  361093 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:20:25.433499  361093 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:20:25.433519  361093 main.go:141] libmachine: (embed-certs-665766) DBG | Closing plugin on server side
	I0229 02:20:25.441485  361093 main.go:141] libmachine: Making call to close driver server
	I0229 02:20:25.441506  361093 main.go:141] libmachine: (embed-certs-665766) Calling .Close
	I0229 02:20:25.441772  361093 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:20:25.441788  361093 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:20:25.803307  361093 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.028599375s)
	I0229 02:20:25.803341  361093 start.go:929] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
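For context on the ConfigMap replace completed above: the logged sed pipeline patches the coredns Corefile so in-cluster DNS resolves host.minikube.internal to the host-side gateway. A sketch of the resulting Corefile fragment, reconstructed directly from the sed expression itself (surrounding directives elided):

	        hosts {
	           192.168.39.1 host.minikube.internal
	           fallthrough
	        }
	        forward . /etc/resolv.conf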
	I0229 02:20:26.329323  361093 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.529964751s)
	I0229 02:20:26.329380  361093 main.go:141] libmachine: Making call to close driver server
	I0229 02:20:26.329389  361093 main.go:141] libmachine: (embed-certs-665766) Calling .Close
	I0229 02:20:26.329754  361093 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:20:26.329817  361093 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:20:26.329838  361093 main.go:141] libmachine: Making call to close driver server
	I0229 02:20:26.329836  361093 main.go:141] libmachine: (embed-certs-665766) DBG | Closing plugin on server side
	I0229 02:20:26.329847  361093 main.go:141] libmachine: (embed-certs-665766) Calling .Close
	I0229 02:20:26.330130  361093 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:20:26.330149  361093 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:20:26.330176  361093 main.go:141] libmachine: (embed-certs-665766) DBG | Closing plugin on server side
	I0229 02:20:26.411660  361093 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.438378455s)
	I0229 02:20:26.411727  361093 node_ready.go:35] waiting up to 6m0s for node "embed-certs-665766" to be "Ready" ...
	I0229 02:20:26.411785  361093 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.468059693s)
	I0229 02:20:26.411846  361093 main.go:141] libmachine: Making call to close driver server
	I0229 02:20:26.411904  361093 main.go:141] libmachine: (embed-certs-665766) Calling .Close
	I0229 02:20:26.412327  361093 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:20:26.412378  361093 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:20:26.412400  361093 main.go:141] libmachine: Making call to close driver server
	I0229 02:20:26.412418  361093 main.go:141] libmachine: (embed-certs-665766) Calling .Close
	I0229 02:20:26.412733  361093 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:20:26.412759  361093 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:20:26.412778  361093 addons.go:470] Verifying addon metrics-server=true in "embed-certs-665766"
	I0229 02:20:26.429799  361093 node_ready.go:49] node "embed-certs-665766" has status "Ready":"True"
	I0229 02:20:26.429834  361093 node_ready.go:38] duration metric: took 18.091958ms waiting for node "embed-certs-665766" to be "Ready" ...
	I0229 02:20:26.429848  361093 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 02:20:26.443918  361093 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-pf9x9" in "kube-system" namespace to be "Ready" ...
	I0229 02:20:26.453871  361093 pod_ready.go:92] pod "coredns-5dd5756b68-pf9x9" in "kube-system" namespace has status "Ready":"True"
	I0229 02:20:26.453893  361093 pod_ready.go:81] duration metric: took 9.938572ms waiting for pod "coredns-5dd5756b68-pf9x9" in "kube-system" namespace to be "Ready" ...
	I0229 02:20:26.453902  361093 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-665766" in "kube-system" namespace to be "Ready" ...
	I0229 02:20:26.459920  361093 pod_ready.go:92] pod "etcd-embed-certs-665766" in "kube-system" namespace has status "Ready":"True"
	I0229 02:20:26.459946  361093 pod_ready.go:81] duration metric: took 6.037204ms waiting for pod "etcd-embed-certs-665766" in "kube-system" namespace to be "Ready" ...
	I0229 02:20:26.459959  361093 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-665766" in "kube-system" namespace to be "Ready" ...
	I0229 02:20:26.465595  361093 pod_ready.go:92] pod "kube-apiserver-embed-certs-665766" in "kube-system" namespace has status "Ready":"True"
	I0229 02:20:26.465611  361093 pod_ready.go:81] duration metric: took 5.645555ms waiting for pod "kube-apiserver-embed-certs-665766" in "kube-system" namespace to be "Ready" ...
	I0229 02:20:26.465620  361093 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-665766" in "kube-system" namespace to be "Ready" ...
	I0229 02:20:26.470943  361093 pod_ready.go:92] pod "kube-controller-manager-embed-certs-665766" in "kube-system" namespace has status "Ready":"True"
	I0229 02:20:26.470960  361093 pod_ready.go:81] duration metric: took 5.334268ms waiting for pod "kube-controller-manager-embed-certs-665766" in "kube-system" namespace to be "Ready" ...
	I0229 02:20:26.470968  361093 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-gtjq6" in "kube-system" namespace to be "Ready" ...
	I0229 02:20:26.815785  361093 pod_ready.go:92] pod "kube-proxy-gtjq6" in "kube-system" namespace has status "Ready":"True"
	I0229 02:20:26.815809  361093 pod_ready.go:81] duration metric: took 344.835753ms waiting for pod "kube-proxy-gtjq6" in "kube-system" namespace to be "Ready" ...
	I0229 02:20:26.815820  361093 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-665766" in "kube-system" namespace to be "Ready" ...
	I0229 02:20:27.179678  361093 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.87625995s)
	I0229 02:20:27.179741  361093 main.go:141] libmachine: Making call to close driver server
	I0229 02:20:27.179758  361093 main.go:141] libmachine: (embed-certs-665766) Calling .Close
	I0229 02:20:27.180115  361093 main.go:141] libmachine: (embed-certs-665766) DBG | Closing plugin on server side
	I0229 02:20:27.180169  361093 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:20:27.180191  361093 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:20:27.180201  361093 main.go:141] libmachine: Making call to close driver server
	I0229 02:20:27.180212  361093 main.go:141] libmachine: (embed-certs-665766) Calling .Close
	I0229 02:20:27.180476  361093 main.go:141] libmachine: (embed-certs-665766) DBG | Closing plugin on server side
	I0229 02:20:27.180521  361093 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:20:27.180534  361093 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:20:27.182123  361093 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-665766 addons enable metrics-server
	
	I0229 02:20:27.183370  361093 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0229 02:20:27.184639  361093 addons.go:505] enable addons completed in 3.721930887s: enabled=[default-storageclass storage-provisioner metrics-server dashboard]
	I0229 02:20:27.223120  361093 pod_ready.go:92] pod "kube-scheduler-embed-certs-665766" in "kube-system" namespace has status "Ready":"True"
	I0229 02:20:27.223149  361093 pod_ready.go:81] duration metric: took 407.321396ms waiting for pod "kube-scheduler-embed-certs-665766" in "kube-system" namespace to be "Ready" ...
	I0229 02:20:27.223163  361093 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace to be "Ready" ...
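The metrics-server wait that begins here stays at "Ready":"False" through all of the polling below. A minimal manual check, assuming kubectl is pointed at the embed-certs-665766 context (deployment name and label inferred from the replica-set-style pod name; both are assumptions):

	kubectl -n kube-system get pods -l k8s-app=metrics-server
	kubectl -n kube-system describe pod metrics-server-57f55c9bc5-kdvvw
	kubectl -n kube-system logs deploy/metrics-server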
	I0229 02:20:29.231076  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:20:31.729827  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:20:33.745431  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:20:36.231699  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:20:38.238868  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:20:40.733145  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:20:43.231183  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:20:46.359040  360776 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 02:20:46.359315  360776 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 02:20:46.359346  360776 kubeadm.go:322] 
	I0229 02:20:46.359398  360776 kubeadm.go:322] Unfortunately, an error has occurred:
	I0229 02:20:46.359458  360776 kubeadm.go:322] 	timed out waiting for the condition
	I0229 02:20:46.359467  360776 kubeadm.go:322] 
	I0229 02:20:46.359511  360776 kubeadm.go:322] This error is likely caused by:
	I0229 02:20:46.359565  360776 kubeadm.go:322] 	- The kubelet is not running
	I0229 02:20:46.359711  360776 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0229 02:20:46.359720  360776 kubeadm.go:322] 
	I0229 02:20:46.359823  360776 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0229 02:20:46.359867  360776 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0229 02:20:46.359894  360776 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0229 02:20:46.359900  360776 kubeadm.go:322] 
	I0229 02:20:46.360005  360776 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0229 02:20:46.360128  360776 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0229 02:20:46.360236  360776 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0229 02:20:46.360310  360776 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0229 02:20:46.360381  360776 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0229 02:20:46.360410  360776 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0229 02:20:46.361502  360776 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0229 02:20:46.361603  360776 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0229 02:20:46.361688  360776 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	W0229 02:20:46.361890  360776 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
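Since this run uses containerd (--container-runtime=containerd), the docker-based examples kubeadm prints above do not apply directly. A crictl-based sketch of the same troubleshooting steps, matching the probes minikube itself runs later in this log:

	# check kubelet first, then look for crashed control-plane containers
	systemctl status kubelet
	journalctl -xeu kubelet
	sudo crictl ps -a --name kube-apiserver
	sudo crictl logs CONTAINERID    # CONTAINERID is a placeholder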
	
	I0229 02:20:46.361946  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0229 02:20:46.833083  360776 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 02:20:46.850670  360776 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 02:20:46.863291  360776 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
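The exit status 2 above is expected right after the kubeadm reset: this probe only decides whether stale kubeconfig files need cleanup before the retry. A sketch of the check, with paths copied from the log (interpretation of the exit codes follows the "skipping stale config cleanup" message above):

	# exit 0 would mean leftover configs exist and get cleaned up;
	# any other status (as here) means there is nothing stale to remove
	sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf \
	            /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf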
	I0229 02:20:46.863352  360776 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0229 02:20:46.929466  360776 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0229 02:20:46.929532  360776 kubeadm.go:322] [preflight] Running pre-flight checks
	I0229 02:20:47.064941  360776 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0229 02:20:47.065277  360776 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0229 02:20:47.065515  360776 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0229 02:20:47.284721  360776 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0229 02:20:47.285859  360776 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0229 02:20:47.295028  360776 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0229 02:20:47.429614  360776 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0229 02:20:47.431229  360776 out.go:204]   - Generating certificates and keys ...
	I0229 02:20:47.431315  360776 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0229 02:20:47.431389  360776 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0229 02:20:47.431487  360776 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0229 02:20:47.431603  360776 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0229 02:20:47.431719  360776 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0229 02:20:47.431796  360776 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0229 02:20:47.431890  360776 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0229 02:20:47.431974  360776 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0229 02:20:47.432093  360776 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0229 02:20:47.432212  360776 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0229 02:20:47.432275  360776 kubeadm.go:322] [certs] Using the existing "sa" key
	I0229 02:20:47.432366  360776 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0229 02:20:47.946255  360776 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0229 02:20:48.258186  360776 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0229 02:20:48.398982  360776 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0229 02:20:48.545961  360776 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0229 02:20:48.546829  360776 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0229 02:20:45.234594  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:20:47.731325  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:20:49.731500  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:20:48.548500  360776 out.go:204]   - Booting up control plane ...
	I0229 02:20:48.548614  360776 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0229 02:20:48.552604  360776 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0229 02:20:48.553548  360776 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0229 02:20:48.554256  360776 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0229 02:20:48.558508  360776 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0229 02:20:52.231128  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:20:54.231680  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:20:56.730802  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:20:58.731112  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:21:01.232479  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:21:03.234385  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:21:05.730268  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:21:08.231970  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:21:10.233205  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:21:12.734859  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:21:15.230796  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:21:17.231363  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:21:19.231526  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:21:21.731071  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:21:23.732749  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:21:26.230929  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:21:28.731131  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:21:28.560199  360776 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0229 02:21:28.560645  360776 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 02:21:28.560944  360776 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 02:21:31.231022  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:21:33.731025  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:21:33.561853  360776 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 02:21:33.562057  360776 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 02:21:35.731752  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:21:38.229754  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:21:40.229986  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:21:42.730384  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:21:44.730788  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:21:43.562844  360776 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 02:21:43.563063  360776 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 02:21:46.731643  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:21:49.232075  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:21:51.729864  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:21:53.730399  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:21:55.730728  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:21:57.732563  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:22:00.232769  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:22:02.233327  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:22:04.730582  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:22:03.563980  360776 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 02:22:03.564274  360776 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 02:22:06.730978  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:22:08.731753  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:22:10.733273  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:22:13.230888  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:22:15.231384  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:22:17.233309  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:22:19.736876  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:22:22.231745  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:22:24.730148  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:22:26.730332  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:22:28.731241  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:22:31.232262  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:22:33.729969  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:22:36.230298  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:22:38.232199  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:22:43.566143  360776 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 02:22:43.566419  360776 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 02:22:43.566432  360776 kubeadm.go:322] 
	I0229 02:22:43.566494  360776 kubeadm.go:322] Unfortunately, an error has occurred:
	I0229 02:22:43.566562  360776 kubeadm.go:322] 	timed out waiting for the condition
	I0229 02:22:43.566573  360776 kubeadm.go:322] 
	I0229 02:22:43.566621  360776 kubeadm.go:322] This error is likely caused by:
	I0229 02:22:43.566669  360776 kubeadm.go:322] 	- The kubelet is not running
	I0229 02:22:43.566789  360776 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0229 02:22:43.566798  360776 kubeadm.go:322] 
	I0229 02:22:43.566954  360776 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0229 02:22:43.567000  360776 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0229 02:22:43.567049  360776 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0229 02:22:43.567060  360776 kubeadm.go:322] 
	I0229 02:22:43.567282  360776 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0229 02:22:43.567417  360776 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0229 02:22:43.567521  360776 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0229 02:22:43.567592  360776 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0229 02:22:43.567684  360776 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0229 02:22:43.567736  360776 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0229 02:22:43.568136  360776 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0229 02:22:43.568244  360776 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0229 02:22:43.568368  360776 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0229 02:22:43.568439  360776 kubeadm.go:406] StartCluster complete in 8m6.863500244s
	I0229 02:22:43.568498  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:22:43.568644  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:22:43.619887  360776 cri.go:89] found id: ""
	I0229 02:22:43.619917  360776 logs.go:276] 0 containers: []
	W0229 02:22:43.619926  360776 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:22:43.619932  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:22:43.619996  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:22:43.658073  360776 cri.go:89] found id: ""
	I0229 02:22:43.658110  360776 logs.go:276] 0 containers: []
	W0229 02:22:43.658120  360776 logs.go:278] No container was found matching "etcd"
	I0229 02:22:43.658127  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:22:43.658197  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:22:43.697445  360776 cri.go:89] found id: ""
	I0229 02:22:43.697476  360776 logs.go:276] 0 containers: []
	W0229 02:22:43.697489  360776 logs.go:278] No container was found matching "coredns"
	I0229 02:22:43.697495  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:22:43.697561  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:22:43.736241  360776 cri.go:89] found id: ""
	I0229 02:22:43.736270  360776 logs.go:276] 0 containers: []
	W0229 02:22:43.736278  360776 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:22:43.736285  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:22:43.736345  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:22:43.775185  360776 cri.go:89] found id: ""
	I0229 02:22:43.775212  360776 logs.go:276] 0 containers: []
	W0229 02:22:43.775221  360776 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:22:43.775227  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:22:43.775292  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:22:43.815309  360776 cri.go:89] found id: ""
	I0229 02:22:43.815338  360776 logs.go:276] 0 containers: []
	W0229 02:22:43.815347  360776 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:22:43.815353  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:22:43.815436  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:22:43.860248  360776 cri.go:89] found id: ""
	I0229 02:22:43.860284  360776 logs.go:276] 0 containers: []
	W0229 02:22:43.860296  360776 logs.go:278] No container was found matching "kindnet"
	I0229 02:22:43.860305  360776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:22:43.860375  360776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:22:43.918615  360776 cri.go:89] found id: ""
	I0229 02:22:43.918644  360776 logs.go:276] 0 containers: []
	W0229 02:22:43.918656  360776 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:22:43.918671  360776 logs.go:123] Gathering logs for kubelet ...
	I0229 02:22:43.918687  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:22:43.966006  360776 logs.go:123] Gathering logs for dmesg ...
	I0229 02:22:43.966045  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:22:43.981843  360776 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:22:43.981875  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:22:44.056838  360776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:22:44.056870  360776 logs.go:123] Gathering logs for containerd ...
	I0229 02:22:44.056887  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:22:44.090353  360776 logs.go:123] Gathering logs for container status ...
	I0229 02:22:44.090384  360776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0229 02:22:44.143169  360776 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0229 02:22:44.143235  360776 out.go:239] * 
	W0229 02:22:44.143336  360776 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0229 02:22:44.143366  360776 out.go:239] * 
	W0229 02:22:44.144361  360776 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0229 02:22:44.147267  360776 out.go:177] 
	W0229 02:22:44.148417  360776 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtime's CLI, e.g. docker.
	Here is one example of how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0229 02:22:44.148458  360776 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0229 02:22:44.148476  360776 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0229 02:22:44.149710  360776 out.go:177] 
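	
	(The failed start above ends in the kubeadm wait-control-plane phase with the kubelet never answering on :10248. Following the log's own suggestions, a retry would look roughly like the sketch below; the profile name old-k8s-version-254968 is inferred from the containerd section at the end of this report, so substitute whichever profile actually failed.)
	
	  # Collect logs for a bug report, as the box above requests
	  minikube logs --file=logs.txt -p old-k8s-version-254968
	  # Inspect the kubelet unit inside the guest VM
	  minikube ssh -p old-k8s-version-254968 -- sudo journalctl -xeu kubelet
	  # Retry with the kubelet pinned to the systemd cgroup driver, per the suggestion above
	  minikube start -p old-k8s-version-254968 --driver=kvm2 --container-runtime=containerd --extra-config=kubelet.cgroup-driver=systemd
	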
	I0229 02:22:40.731211  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:22:43.230524  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:22:45.232018  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:22:47.731166  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:22:50.231074  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:22:52.731967  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:22:54.732431  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:22:57.230523  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:22:59.230839  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:23:01.231188  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:23:03.730692  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:23:05.731139  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:23:08.229972  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:23:10.230875  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:23:12.731348  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:23:15.233235  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:23:17.730643  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:23:20.232963  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:23:22.730485  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:23:24.730676  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:23:26.731120  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:23:29.230981  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:23:31.730910  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:23:34.231238  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:23:36.232335  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:23:38.731165  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:23:40.731274  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:23:43.232341  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:23:45.731736  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:23:48.230390  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:23:50.740709  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:23:53.230645  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:23:55.730726  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:23:57.730949  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:23:59.732968  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:24:02.230504  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:24:04.732474  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:24:07.230833  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:24:09.730847  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:24:11.730927  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:24:14.231274  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:24:16.729839  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:24:18.731051  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:24:21.231048  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:24:23.731084  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:24:26.229186  361093 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace has status "Ready":"False"
	I0229 02:24:27.229797  361093 pod_ready.go:81] duration metric: took 4m0.006619539s waiting for pod "metrics-server-57f55c9bc5-kdvvw" in "kube-system" namespace to be "Ready" ...
	E0229 02:24:27.229822  361093 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0229 02:24:27.229831  361093 pod_ready.go:38] duration metric: took 4m0.799971766s for extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 02:24:27.229884  361093 api_server.go:52] waiting for apiserver process to appear ...
	I0229 02:24:27.229929  361093 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:24:27.229995  361093 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:24:27.291934  361093 cri.go:89] found id: "ffe2867f408184f527822e4a2cddcfcfa4a8864f9998a3ee577ac492de112a8c"
	I0229 02:24:27.291961  361093 cri.go:89] found id: ""
	I0229 02:24:27.291970  361093 logs.go:276] 1 containers: [ffe2867f408184f527822e4a2cddcfcfa4a8864f9998a3ee577ac492de112a8c]
	I0229 02:24:27.292035  361093 ssh_runner.go:195] Run: which crictl
	I0229 02:24:27.297949  361093 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:24:27.298016  361093 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:24:27.339415  361093 cri.go:89] found id: "305afec51d956ad266a04132bd74138374be1aac1c9f76c9743b067f1eb338ff"
	I0229 02:24:27.339442  361093 cri.go:89] found id: ""
	I0229 02:24:27.339453  361093 logs.go:276] 1 containers: [305afec51d956ad266a04132bd74138374be1aac1c9f76c9743b067f1eb338ff]
	I0229 02:24:27.339507  361093 ssh_runner.go:195] Run: which crictl
	I0229 02:24:27.345127  361093 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:24:27.345177  361093 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:24:27.387015  361093 cri.go:89] found id: "44685132afea89185ffcb886e69ef264038d950cde72f2bccbe1714ffe6933f9"
	I0229 02:24:27.387037  361093 cri.go:89] found id: ""
	I0229 02:24:27.387046  361093 logs.go:276] 1 containers: [44685132afea89185ffcb886e69ef264038d950cde72f2bccbe1714ffe6933f9]
	I0229 02:24:27.387102  361093 ssh_runner.go:195] Run: which crictl
	I0229 02:24:27.393582  361093 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:24:27.393642  361093 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:24:27.433094  361093 cri.go:89] found id: "a050899dc459c0f956fc847db2f62c4392af771b5addaaecef80067ebd7cddd6"
	I0229 02:24:27.433119  361093 cri.go:89] found id: ""
	I0229 02:24:27.433128  361093 logs.go:276] 1 containers: [a050899dc459c0f956fc847db2f62c4392af771b5addaaecef80067ebd7cddd6]
	I0229 02:24:27.433192  361093 ssh_runner.go:195] Run: which crictl
	I0229 02:24:27.438777  361093 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:24:27.438849  361093 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:24:27.483522  361093 cri.go:89] found id: "22cc3882d1d0af1d1f8ad69f8282032dd8734f175d47efa5759d3e1a094d665d"
	I0229 02:24:27.483549  361093 cri.go:89] found id: ""
	I0229 02:24:27.483558  361093 logs.go:276] 1 containers: [22cc3882d1d0af1d1f8ad69f8282032dd8734f175d47efa5759d3e1a094d665d]
	I0229 02:24:27.483617  361093 ssh_runner.go:195] Run: which crictl
	I0229 02:24:27.490176  361093 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:24:27.490243  361093 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:24:27.532469  361093 cri.go:89] found id: "fda4f7aa1be28cbaaaf7ebb11881da33db2e5c4d971d65331158f96266256cf1"
	I0229 02:24:27.532487  361093 cri.go:89] found id: ""
	I0229 02:24:27.532494  361093 logs.go:276] 1 containers: [fda4f7aa1be28cbaaaf7ebb11881da33db2e5c4d971d65331158f96266256cf1]
	I0229 02:24:27.532538  361093 ssh_runner.go:195] Run: which crictl
	I0229 02:24:27.537281  361093 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:24:27.537340  361093 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:24:27.576126  361093 cri.go:89] found id: ""
	I0229 02:24:27.576148  361093 logs.go:276] 0 containers: []
	W0229 02:24:27.576159  361093 logs.go:278] No container was found matching "kindnet"
	I0229 02:24:27.576166  361093 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0229 02:24:27.576217  361093 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0229 02:24:27.615465  361093 cri.go:89] found id: "55abb48944a1bc9c3edcba34b0a7c1405b357105dbcbb8ec4f3132980fc825d6"
	I0229 02:24:27.615490  361093 cri.go:89] found id: ""
	I0229 02:24:27.615506  361093 logs.go:276] 1 containers: [55abb48944a1bc9c3edcba34b0a7c1405b357105dbcbb8ec4f3132980fc825d6]
	I0229 02:24:27.615564  361093 ssh_runner.go:195] Run: which crictl
	I0229 02:24:27.620302  361093 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:24:27.620360  361093 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:24:27.659108  361093 cri.go:89] found id: "87380c345330760b0702052952314cc9fc149d5ec7c01a8b237931cd373804ac"
	I0229 02:24:27.659124  361093 cri.go:89] found id: ""
	I0229 02:24:27.659130  361093 logs.go:276] 1 containers: [87380c345330760b0702052952314cc9fc149d5ec7c01a8b237931cd373804ac]
	I0229 02:24:27.659172  361093 ssh_runner.go:195] Run: which crictl
	I0229 02:24:27.664403  361093 logs.go:123] Gathering logs for kubelet ...
	I0229 02:24:27.664423  361093 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0229 02:24:27.734792  361093 logs.go:138] Found kubelet problem: Feb 29 02:20:23 embed-certs-665766 kubelet[3679]: W0229 02:20:23.366266    3679 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:embed-certs-665766" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-665766' and this object
	W0229 02:24:27.734947  361093 logs.go:138] Found kubelet problem: Feb 29 02:20:23 embed-certs-665766 kubelet[3679]: E0229 02:20:23.366395    3679 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:embed-certs-665766" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-665766' and this object
	W0229 02:24:27.736060  361093 logs.go:138] Found kubelet problem: Feb 29 02:20:23 embed-certs-665766 kubelet[3679]: W0229 02:20:23.644230    3679 reflector.go:535] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:embed-certs-665766" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-665766' and this object
	W0229 02:24:27.736207  361093 logs.go:138] Found kubelet problem: Feb 29 02:20:23 embed-certs-665766 kubelet[3679]: E0229 02:20:23.644260    3679 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:embed-certs-665766" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-665766' and this object
	I0229 02:24:27.765922  361093 logs.go:123] Gathering logs for dmesg ...
	I0229 02:24:27.765938  361093 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:24:27.785796  361093 logs.go:123] Gathering logs for kube-apiserver [ffe2867f408184f527822e4a2cddcfcfa4a8864f9998a3ee577ac492de112a8c] ...
	I0229 02:24:27.785813  361093 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ffe2867f408184f527822e4a2cddcfcfa4a8864f9998a3ee577ac492de112a8c"
	I0229 02:24:27.842548  361093 logs.go:123] Gathering logs for etcd [305afec51d956ad266a04132bd74138374be1aac1c9f76c9743b067f1eb338ff] ...
	I0229 02:24:27.842571  361093 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 305afec51d956ad266a04132bd74138374be1aac1c9f76c9743b067f1eb338ff"
	I0229 02:24:27.894566  361093 logs.go:123] Gathering logs for kube-controller-manager [fda4f7aa1be28cbaaaf7ebb11881da33db2e5c4d971d65331158f96266256cf1] ...
	I0229 02:24:27.894593  361093 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fda4f7aa1be28cbaaaf7ebb11881da33db2e5c4d971d65331158f96266256cf1"
	I0229 02:24:27.958511  361093 logs.go:123] Gathering logs for storage-provisioner [55abb48944a1bc9c3edcba34b0a7c1405b357105dbcbb8ec4f3132980fc825d6] ...
	I0229 02:24:27.958540  361093 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 55abb48944a1bc9c3edcba34b0a7c1405b357105dbcbb8ec4f3132980fc825d6"
	I0229 02:24:28.003113  361093 logs.go:123] Gathering logs for container status ...
	I0229 02:24:28.003143  361093 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:24:28.071141  361093 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:24:28.071170  361093 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0229 02:24:28.225631  361093 logs.go:123] Gathering logs for coredns [44685132afea89185ffcb886e69ef264038d950cde72f2bccbe1714ffe6933f9] ...
	I0229 02:24:28.225669  361093 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 44685132afea89185ffcb886e69ef264038d950cde72f2bccbe1714ffe6933f9"
	I0229 02:24:28.269384  361093 logs.go:123] Gathering logs for kube-scheduler [a050899dc459c0f956fc847db2f62c4392af771b5addaaecef80067ebd7cddd6] ...
	I0229 02:24:28.269420  361093 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a050899dc459c0f956fc847db2f62c4392af771b5addaaecef80067ebd7cddd6"
	I0229 02:24:28.317580  361093 logs.go:123] Gathering logs for kube-proxy [22cc3882d1d0af1d1f8ad69f8282032dd8734f175d47efa5759d3e1a094d665d] ...
	I0229 02:24:28.317613  361093 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 22cc3882d1d0af1d1f8ad69f8282032dd8734f175d47efa5759d3e1a094d665d"
	I0229 02:24:28.367251  361093 logs.go:123] Gathering logs for kubernetes-dashboard [87380c345330760b0702052952314cc9fc149d5ec7c01a8b237931cd373804ac] ...
	I0229 02:24:28.367281  361093 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 87380c345330760b0702052952314cc9fc149d5ec7c01a8b237931cd373804ac"
	I0229 02:24:28.406902  361093 logs.go:123] Gathering logs for containerd ...
	I0229 02:24:28.406933  361093 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:24:28.469427  361093 out.go:304] Setting ErrFile to fd 2...
	I0229 02:24:28.469451  361093 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0229 02:24:28.469508  361093 out.go:239] X Problems detected in kubelet:
	W0229 02:24:28.469521  361093 out.go:239]   Feb 29 02:20:23 embed-certs-665766 kubelet[3679]: W0229 02:20:23.366266    3679 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:embed-certs-665766" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-665766' and this object
	W0229 02:24:28.469577  361093 out.go:239]   Feb 29 02:20:23 embed-certs-665766 kubelet[3679]: E0229 02:20:23.366395    3679 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:embed-certs-665766" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-665766' and this object
	W0229 02:24:28.469591  361093 out.go:239]   Feb 29 02:20:23 embed-certs-665766 kubelet[3679]: W0229 02:20:23.644230    3679 reflector.go:535] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:embed-certs-665766" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-665766' and this object
	W0229 02:24:28.469600  361093 out.go:239]   Feb 29 02:20:23 embed-certs-665766 kubelet[3679]: E0229 02:20:23.644260    3679 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:embed-certs-665766" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-665766' and this object
	I0229 02:24:28.469607  361093 out.go:304] Setting ErrFile to fd 2...
	I0229 02:24:28.469612  361093 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 02:24:38.469939  361093 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:24:38.486853  361093 api_server.go:72] duration metric: took 4m14.516525469s to wait for apiserver process to appear ...
	I0229 02:24:38.486879  361093 api_server.go:88] waiting for apiserver healthz status ...
	I0229 02:24:38.486925  361093 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:24:38.486978  361093 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:24:38.526577  361093 cri.go:89] found id: "ffe2867f408184f527822e4a2cddcfcfa4a8864f9998a3ee577ac492de112a8c"
	I0229 02:24:38.526602  361093 cri.go:89] found id: ""
	I0229 02:24:38.526610  361093 logs.go:276] 1 containers: [ffe2867f408184f527822e4a2cddcfcfa4a8864f9998a3ee577ac492de112a8c]
	I0229 02:24:38.526666  361093 ssh_runner.go:195] Run: which crictl
	I0229 02:24:38.531782  361093 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:24:38.531841  361093 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:24:38.570180  361093 cri.go:89] found id: "305afec51d956ad266a04132bd74138374be1aac1c9f76c9743b067f1eb338ff"
	I0229 02:24:38.570201  361093 cri.go:89] found id: ""
	I0229 02:24:38.570208  361093 logs.go:276] 1 containers: [305afec51d956ad266a04132bd74138374be1aac1c9f76c9743b067f1eb338ff]
	I0229 02:24:38.570258  361093 ssh_runner.go:195] Run: which crictl
	I0229 02:24:38.574922  361093 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:24:38.574988  361093 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:24:38.613064  361093 cri.go:89] found id: "44685132afea89185ffcb886e69ef264038d950cde72f2bccbe1714ffe6933f9"
	I0229 02:24:38.613080  361093 cri.go:89] found id: ""
	I0229 02:24:38.613086  361093 logs.go:276] 1 containers: [44685132afea89185ffcb886e69ef264038d950cde72f2bccbe1714ffe6933f9]
	I0229 02:24:38.613124  361093 ssh_runner.go:195] Run: which crictl
	I0229 02:24:38.617452  361093 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:24:38.617498  361093 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:24:38.657879  361093 cri.go:89] found id: "a050899dc459c0f956fc847db2f62c4392af771b5addaaecef80067ebd7cddd6"
	I0229 02:24:38.657904  361093 cri.go:89] found id: ""
	I0229 02:24:38.657913  361093 logs.go:276] 1 containers: [a050899dc459c0f956fc847db2f62c4392af771b5addaaecef80067ebd7cddd6]
	I0229 02:24:38.657969  361093 ssh_runner.go:195] Run: which crictl
	I0229 02:24:38.662995  361093 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:24:38.663076  361093 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:24:38.705399  361093 cri.go:89] found id: "22cc3882d1d0af1d1f8ad69f8282032dd8734f175d47efa5759d3e1a094d665d"
	I0229 02:24:38.705429  361093 cri.go:89] found id: ""
	I0229 02:24:38.705439  361093 logs.go:276] 1 containers: [22cc3882d1d0af1d1f8ad69f8282032dd8734f175d47efa5759d3e1a094d665d]
	I0229 02:24:38.705501  361093 ssh_runner.go:195] Run: which crictl
	I0229 02:24:38.710316  361093 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:24:38.710378  361093 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:24:38.750644  361093 cri.go:89] found id: "fda4f7aa1be28cbaaaf7ebb11881da33db2e5c4d971d65331158f96266256cf1"
	I0229 02:24:38.750671  361093 cri.go:89] found id: ""
	I0229 02:24:38.750681  361093 logs.go:276] 1 containers: [fda4f7aa1be28cbaaaf7ebb11881da33db2e5c4d971d65331158f96266256cf1]
	I0229 02:24:38.750737  361093 ssh_runner.go:195] Run: which crictl
	I0229 02:24:38.755297  361093 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:24:38.755352  361093 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:24:38.793540  361093 cri.go:89] found id: ""
	I0229 02:24:38.793557  361093 logs.go:276] 0 containers: []
	W0229 02:24:38.793564  361093 logs.go:278] No container was found matching "kindnet"
	I0229 02:24:38.793570  361093 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:24:38.793610  361093 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:24:38.831104  361093 cri.go:89] found id: "87380c345330760b0702052952314cc9fc149d5ec7c01a8b237931cd373804ac"
	I0229 02:24:38.831119  361093 cri.go:89] found id: ""
	I0229 02:24:38.831125  361093 logs.go:276] 1 containers: [87380c345330760b0702052952314cc9fc149d5ec7c01a8b237931cd373804ac]
	I0229 02:24:38.831160  361093 ssh_runner.go:195] Run: which crictl
	I0229 02:24:38.835275  361093 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0229 02:24:38.835323  361093 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0229 02:24:38.873475  361093 cri.go:89] found id: "55abb48944a1bc9c3edcba34b0a7c1405b357105dbcbb8ec4f3132980fc825d6"
	I0229 02:24:38.873493  361093 cri.go:89] found id: ""
	I0229 02:24:38.873500  361093 logs.go:276] 1 containers: [55abb48944a1bc9c3edcba34b0a7c1405b357105dbcbb8ec4f3132980fc825d6]
	I0229 02:24:38.873540  361093 ssh_runner.go:195] Run: which crictl
	I0229 02:24:38.878368  361093 logs.go:123] Gathering logs for kube-scheduler [a050899dc459c0f956fc847db2f62c4392af771b5addaaecef80067ebd7cddd6] ...
	I0229 02:24:38.878390  361093 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a050899dc459c0f956fc847db2f62c4392af771b5addaaecef80067ebd7cddd6"
	I0229 02:24:38.923522  361093 logs.go:123] Gathering logs for kube-proxy [22cc3882d1d0af1d1f8ad69f8282032dd8734f175d47efa5759d3e1a094d665d] ...
	I0229 02:24:38.923548  361093 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 22cc3882d1d0af1d1f8ad69f8282032dd8734f175d47efa5759d3e1a094d665d"
	I0229 02:24:38.964435  361093 logs.go:123] Gathering logs for container status ...
	I0229 02:24:38.964458  361093 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:24:39.005620  361093 logs.go:123] Gathering logs for kubelet ...
	I0229 02:24:39.005651  361093 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0229 02:24:39.073045  361093 logs.go:138] Found kubelet problem: Feb 29 02:20:23 embed-certs-665766 kubelet[3679]: W0229 02:20:23.366266    3679 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:embed-certs-665766" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-665766' and this object
	W0229 02:24:39.073209  361093 logs.go:138] Found kubelet problem: Feb 29 02:20:23 embed-certs-665766 kubelet[3679]: E0229 02:20:23.366395    3679 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:embed-certs-665766" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-665766' and this object
	W0229 02:24:39.074336  361093 logs.go:138] Found kubelet problem: Feb 29 02:20:23 embed-certs-665766 kubelet[3679]: W0229 02:20:23.644230    3679 reflector.go:535] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:embed-certs-665766" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-665766' and this object
	W0229 02:24:39.074496  361093 logs.go:138] Found kubelet problem: Feb 29 02:20:23 embed-certs-665766 kubelet[3679]: E0229 02:20:23.644260    3679 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:embed-certs-665766" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-665766' and this object
	I0229 02:24:39.110446  361093 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:24:39.110478  361093 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0229 02:24:39.232166  361093 logs.go:123] Gathering logs for kube-apiserver [ffe2867f408184f527822e4a2cddcfcfa4a8864f9998a3ee577ac492de112a8c] ...
	I0229 02:24:39.232198  361093 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ffe2867f408184f527822e4a2cddcfcfa4a8864f9998a3ee577ac492de112a8c"
	I0229 02:24:39.280691  361093 logs.go:123] Gathering logs for etcd [305afec51d956ad266a04132bd74138374be1aac1c9f76c9743b067f1eb338ff] ...
	I0229 02:24:39.280722  361093 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 305afec51d956ad266a04132bd74138374be1aac1c9f76c9743b067f1eb338ff"
	I0229 02:24:39.333042  361093 logs.go:123] Gathering logs for storage-provisioner [55abb48944a1bc9c3edcba34b0a7c1405b357105dbcbb8ec4f3132980fc825d6] ...
	I0229 02:24:39.333075  361093 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 55abb48944a1bc9c3edcba34b0a7c1405b357105dbcbb8ec4f3132980fc825d6"
	I0229 02:24:39.376476  361093 logs.go:123] Gathering logs for containerd ...
	I0229 02:24:39.376511  361093 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:24:39.460706  361093 logs.go:123] Gathering logs for dmesg ...
	I0229 02:24:39.460753  361093 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:24:39.478278  361093 logs.go:123] Gathering logs for coredns [44685132afea89185ffcb886e69ef264038d950cde72f2bccbe1714ffe6933f9] ...
	I0229 02:24:39.478312  361093 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 44685132afea89185ffcb886e69ef264038d950cde72f2bccbe1714ffe6933f9"
	I0229 02:24:39.520503  361093 logs.go:123] Gathering logs for kube-controller-manager [fda4f7aa1be28cbaaaf7ebb11881da33db2e5c4d971d65331158f96266256cf1] ...
	I0229 02:24:39.520540  361093 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fda4f7aa1be28cbaaaf7ebb11881da33db2e5c4d971d65331158f96266256cf1"
	I0229 02:24:39.585358  361093 logs.go:123] Gathering logs for kubernetes-dashboard [87380c345330760b0702052952314cc9fc149d5ec7c01a8b237931cd373804ac] ...
	I0229 02:24:39.585398  361093 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 87380c345330760b0702052952314cc9fc149d5ec7c01a8b237931cd373804ac"
	I0229 02:24:39.626645  361093 out.go:304] Setting ErrFile to fd 2...
	I0229 02:24:39.626675  361093 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0229 02:24:39.626752  361093 out.go:239] X Problems detected in kubelet:
	W0229 02:24:39.626765  361093 out.go:239]   Feb 29 02:20:23 embed-certs-665766 kubelet[3679]: W0229 02:20:23.366266    3679 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:embed-certs-665766" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-665766' and this object
	W0229 02:24:39.626773  361093 out.go:239]   Feb 29 02:20:23 embed-certs-665766 kubelet[3679]: E0229 02:20:23.366395    3679 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:embed-certs-665766" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-665766' and this object
	W0229 02:24:39.626785  361093 out.go:239]   Feb 29 02:20:23 embed-certs-665766 kubelet[3679]: W0229 02:20:23.644230    3679 reflector.go:535] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:embed-certs-665766" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-665766' and this object
	W0229 02:24:39.626799  361093 out.go:239]   Feb 29 02:20:23 embed-certs-665766 kubelet[3679]: E0229 02:20:23.644260    3679 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:embed-certs-665766" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-665766' and this object
	I0229 02:24:39.626808  361093 out.go:304] Setting ErrFile to fd 2...
	I0229 02:24:39.626816  361093 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 02:24:49.628247  361093 api_server.go:253] Checking apiserver healthz at https://192.168.39.252:8443/healthz ...
	I0229 02:24:49.633437  361093 api_server.go:279] https://192.168.39.252:8443/healthz returned 200:
	ok
	I0229 02:24:49.634869  361093 api_server.go:141] control plane version: v1.28.4
	I0229 02:24:49.634888  361093 api_server.go:131] duration metric: took 11.148001248s to wait for apiserver health ...
	I0229 02:24:49.634899  361093 system_pods.go:43] waiting for kube-system pods to appear ...
	I0229 02:24:49.634928  361093 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:24:49.634996  361093 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:24:49.677174  361093 cri.go:89] found id: "ffe2867f408184f527822e4a2cddcfcfa4a8864f9998a3ee577ac492de112a8c"
	I0229 02:24:49.677204  361093 cri.go:89] found id: ""
	I0229 02:24:49.677214  361093 logs.go:276] 1 containers: [ffe2867f408184f527822e4a2cddcfcfa4a8864f9998a3ee577ac492de112a8c]
	I0229 02:24:49.677292  361093 ssh_runner.go:195] Run: which crictl
	I0229 02:24:49.682331  361093 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 02:24:49.682397  361093 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:24:49.722340  361093 cri.go:89] found id: "305afec51d956ad266a04132bd74138374be1aac1c9f76c9743b067f1eb338ff"
	I0229 02:24:49.722363  361093 cri.go:89] found id: ""
	I0229 02:24:49.722370  361093 logs.go:276] 1 containers: [305afec51d956ad266a04132bd74138374be1aac1c9f76c9743b067f1eb338ff]
	I0229 02:24:49.722429  361093 ssh_runner.go:195] Run: which crictl
	I0229 02:24:49.727151  361093 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 02:24:49.727206  361093 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:24:49.771669  361093 cri.go:89] found id: "44685132afea89185ffcb886e69ef264038d950cde72f2bccbe1714ffe6933f9"
	I0229 02:24:49.771693  361093 cri.go:89] found id: ""
	I0229 02:24:49.771700  361093 logs.go:276] 1 containers: [44685132afea89185ffcb886e69ef264038d950cde72f2bccbe1714ffe6933f9]
	I0229 02:24:49.771750  361093 ssh_runner.go:195] Run: which crictl
	I0229 02:24:49.777043  361093 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:24:49.777091  361093 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:24:49.817045  361093 cri.go:89] found id: "a050899dc459c0f956fc847db2f62c4392af771b5addaaecef80067ebd7cddd6"
	I0229 02:24:49.817071  361093 cri.go:89] found id: ""
	I0229 02:24:49.817081  361093 logs.go:276] 1 containers: [a050899dc459c0f956fc847db2f62c4392af771b5addaaecef80067ebd7cddd6]
	I0229 02:24:49.817130  361093 ssh_runner.go:195] Run: which crictl
	I0229 02:24:49.821786  361093 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:24:49.821837  361093 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:24:49.860078  361093 cri.go:89] found id: "22cc3882d1d0af1d1f8ad69f8282032dd8734f175d47efa5759d3e1a094d665d"
	I0229 02:24:49.860110  361093 cri.go:89] found id: ""
	I0229 02:24:49.860119  361093 logs.go:276] 1 containers: [22cc3882d1d0af1d1f8ad69f8282032dd8734f175d47efa5759d3e1a094d665d]
	I0229 02:24:49.860183  361093 ssh_runner.go:195] Run: which crictl
	I0229 02:24:49.866369  361093 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:24:49.866473  361093 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:24:49.915578  361093 cri.go:89] found id: "fda4f7aa1be28cbaaaf7ebb11881da33db2e5c4d971d65331158f96266256cf1"
	I0229 02:24:49.915607  361093 cri.go:89] found id: ""
	I0229 02:24:49.915615  361093 logs.go:276] 1 containers: [fda4f7aa1be28cbaaaf7ebb11881da33db2e5c4d971d65331158f96266256cf1]
	I0229 02:24:49.915684  361093 ssh_runner.go:195] Run: which crictl
	I0229 02:24:49.920846  361093 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 02:24:49.920932  361093 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:24:49.962645  361093 cri.go:89] found id: ""
	I0229 02:24:49.962671  361093 logs.go:276] 0 containers: []
	W0229 02:24:49.962680  361093 logs.go:278] No container was found matching "kindnet"
	I0229 02:24:49.962687  361093 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:24:49.962740  361093 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:24:50.011096  361093 cri.go:89] found id: "87380c345330760b0702052952314cc9fc149d5ec7c01a8b237931cd373804ac"
	I0229 02:24:50.011121  361093 cri.go:89] found id: ""
	I0229 02:24:50.011128  361093 logs.go:276] 1 containers: [87380c345330760b0702052952314cc9fc149d5ec7c01a8b237931cd373804ac]
	I0229 02:24:50.011178  361093 ssh_runner.go:195] Run: which crictl
	I0229 02:24:50.016421  361093 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0229 02:24:50.016476  361093 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0229 02:24:50.063649  361093 cri.go:89] found id: "55abb48944a1bc9c3edcba34b0a7c1405b357105dbcbb8ec4f3132980fc825d6"
	I0229 02:24:50.063670  361093 cri.go:89] found id: ""
	I0229 02:24:50.063676  361093 logs.go:276] 1 containers: [55abb48944a1bc9c3edcba34b0a7c1405b357105dbcbb8ec4f3132980fc825d6]
	I0229 02:24:50.063733  361093 ssh_runner.go:195] Run: which crictl
	I0229 02:24:50.068841  361093 logs.go:123] Gathering logs for etcd [305afec51d956ad266a04132bd74138374be1aac1c9f76c9743b067f1eb338ff] ...
	I0229 02:24:50.068860  361093 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 305afec51d956ad266a04132bd74138374be1aac1c9f76c9743b067f1eb338ff"
	I0229 02:24:50.125960  361093 logs.go:123] Gathering logs for coredns [44685132afea89185ffcb886e69ef264038d950cde72f2bccbe1714ffe6933f9] ...
	I0229 02:24:50.125991  361093 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 44685132afea89185ffcb886e69ef264038d950cde72f2bccbe1714ffe6933f9"
	I0229 02:24:50.168727  361093 logs.go:123] Gathering logs for kube-controller-manager [fda4f7aa1be28cbaaaf7ebb11881da33db2e5c4d971d65331158f96266256cf1] ...
	I0229 02:24:50.168762  361093 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fda4f7aa1be28cbaaaf7ebb11881da33db2e5c4d971d65331158f96266256cf1"
	I0229 02:24:50.240474  361093 logs.go:123] Gathering logs for kubernetes-dashboard [87380c345330760b0702052952314cc9fc149d5ec7c01a8b237931cd373804ac] ...
	I0229 02:24:50.240509  361093 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 87380c345330760b0702052952314cc9fc149d5ec7c01a8b237931cd373804ac"
	I0229 02:24:50.284140  361093 logs.go:123] Gathering logs for kubelet ...
	I0229 02:24:50.284171  361093 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0229 02:24:50.348949  361093 logs.go:138] Found kubelet problem: Feb 29 02:20:23 embed-certs-665766 kubelet[3679]: W0229 02:20:23.366266    3679 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:embed-certs-665766" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-665766' and this object
	W0229 02:24:50.349117  361093 logs.go:138] Found kubelet problem: Feb 29 02:20:23 embed-certs-665766 kubelet[3679]: E0229 02:20:23.366395    3679 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:embed-certs-665766" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-665766' and this object
	W0229 02:24:50.350594  361093 logs.go:138] Found kubelet problem: Feb 29 02:20:23 embed-certs-665766 kubelet[3679]: W0229 02:20:23.644230    3679 reflector.go:535] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:embed-certs-665766" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-665766' and this object
	W0229 02:24:50.350762  361093 logs.go:138] Found kubelet problem: Feb 29 02:20:23 embed-certs-665766 kubelet[3679]: E0229 02:20:23.644260    3679 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:embed-certs-665766" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-665766' and this object
	I0229 02:24:50.381167  361093 logs.go:123] Gathering logs for dmesg ...
	I0229 02:24:50.381209  361093 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:24:50.397094  361093 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:24:50.397126  361093 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0229 02:24:50.526336  361093 logs.go:123] Gathering logs for kube-apiserver [ffe2867f408184f527822e4a2cddcfcfa4a8864f9998a3ee577ac492de112a8c] ...
	I0229 02:24:50.526374  361093 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ffe2867f408184f527822e4a2cddcfcfa4a8864f9998a3ee577ac492de112a8c"
	I0229 02:24:50.580463  361093 logs.go:123] Gathering logs for kube-scheduler [a050899dc459c0f956fc847db2f62c4392af771b5addaaecef80067ebd7cddd6] ...
	I0229 02:24:50.580495  361093 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a050899dc459c0f956fc847db2f62c4392af771b5addaaecef80067ebd7cddd6"
	I0229 02:24:50.627952  361093 logs.go:123] Gathering logs for kube-proxy [22cc3882d1d0af1d1f8ad69f8282032dd8734f175d47efa5759d3e1a094d665d] ...
	I0229 02:24:50.627988  361093 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 22cc3882d1d0af1d1f8ad69f8282032dd8734f175d47efa5759d3e1a094d665d"
	I0229 02:24:50.671981  361093 logs.go:123] Gathering logs for storage-provisioner [55abb48944a1bc9c3edcba34b0a7c1405b357105dbcbb8ec4f3132980fc825d6] ...
	I0229 02:24:50.672014  361093 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 55abb48944a1bc9c3edcba34b0a7c1405b357105dbcbb8ec4f3132980fc825d6"
	I0229 02:24:50.711025  361093 logs.go:123] Gathering logs for containerd ...
	I0229 02:24:50.711079  361093 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 02:24:50.780064  361093 logs.go:123] Gathering logs for container status ...
	I0229 02:24:50.780110  361093 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:24:50.827300  361093 out.go:304] Setting ErrFile to fd 2...
	I0229 02:24:50.827326  361093 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0229 02:24:50.827392  361093 out.go:239] X Problems detected in kubelet:
	W0229 02:24:50.827407  361093 out.go:239]   Feb 29 02:20:23 embed-certs-665766 kubelet[3679]: W0229 02:20:23.366266    3679 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:embed-certs-665766" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-665766' and this object
	W0229 02:24:50.827419  361093 out.go:239]   Feb 29 02:20:23 embed-certs-665766 kubelet[3679]: E0229 02:20:23.366395    3679 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:embed-certs-665766" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-665766' and this object
	W0229 02:24:50.827432  361093 out.go:239]   Feb 29 02:20:23 embed-certs-665766 kubelet[3679]: W0229 02:20:23.644230    3679 reflector.go:535] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:embed-certs-665766" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-665766' and this object
	W0229 02:24:50.827443  361093 out.go:239]   Feb 29 02:20:23 embed-certs-665766 kubelet[3679]: E0229 02:20:23.644260    3679 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:embed-certs-665766" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-665766' and this object
	I0229 02:24:50.827459  361093 out.go:304] Setting ErrFile to fd 2...
	I0229 02:24:50.827470  361093 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 02:25:00.835010  361093 system_pods.go:59] 8 kube-system pods found
	I0229 02:25:00.835043  361093 system_pods.go:61] "coredns-5dd5756b68-pf9x9" [d22bf48c-c24a-4e0c-8b94-2269b2c1e45e] Running
	I0229 02:25:00.835048  361093 system_pods.go:61] "etcd-embed-certs-665766" [26a6156f-b3e4-4e05-862c-98c77e9ca852] Running
	I0229 02:25:00.835052  361093 system_pods.go:61] "kube-apiserver-embed-certs-665766" [d6b452c8-0a2c-4ba9-bebc-f04625dcfeef] Running
	I0229 02:25:00.835056  361093 system_pods.go:61] "kube-controller-manager-embed-certs-665766" [d2542a5c-ba48-4e5b-b832-f417b7b1f060] Running
	I0229 02:25:00.835059  361093 system_pods.go:61] "kube-proxy-gtjq6" [e0e66d49-0861-4546-8b3a-0ea3f2021769] Running
	I0229 02:25:00.835062  361093 system_pods.go:61] "kube-scheduler-embed-certs-665766" [4e8a17cb-507c-41e8-a326-d88d778f1eea] Running
	I0229 02:25:00.835069  361093 system_pods.go:61] "metrics-server-57f55c9bc5-kdvvw" [b70c8f8c-dd5b-4653-838d-3815d52cc0f3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0229 02:25:00.835075  361093 system_pods.go:61] "storage-provisioner" [97993825-092f-4d18-aeeb-64fde6ba795e] Running
	I0229 02:25:00.835084  361093 system_pods.go:74] duration metric: took 11.200178346s to wait for pod list to return data ...
	I0229 02:25:00.835095  361093 default_sa.go:34] waiting for default service account to be created ...
	I0229 02:25:00.837666  361093 default_sa.go:45] found service account: "default"
	I0229 02:25:00.837688  361093 default_sa.go:55] duration metric: took 2.584028ms for default service account to be created ...
	I0229 02:25:00.837699  361093 system_pods.go:116] waiting for k8s-apps to be running ...
	I0229 02:25:00.844008  361093 system_pods.go:86] 8 kube-system pods found
	I0229 02:25:00.844031  361093 system_pods.go:89] "coredns-5dd5756b68-pf9x9" [d22bf48c-c24a-4e0c-8b94-2269b2c1e45e] Running
	I0229 02:25:00.844038  361093 system_pods.go:89] "etcd-embed-certs-665766" [26a6156f-b3e4-4e05-862c-98c77e9ca852] Running
	I0229 02:25:00.844043  361093 system_pods.go:89] "kube-apiserver-embed-certs-665766" [d6b452c8-0a2c-4ba9-bebc-f04625dcfeef] Running
	I0229 02:25:00.844050  361093 system_pods.go:89] "kube-controller-manager-embed-certs-665766" [d2542a5c-ba48-4e5b-b832-f417b7b1f060] Running
	I0229 02:25:00.844055  361093 system_pods.go:89] "kube-proxy-gtjq6" [e0e66d49-0861-4546-8b3a-0ea3f2021769] Running
	I0229 02:25:00.844060  361093 system_pods.go:89] "kube-scheduler-embed-certs-665766" [4e8a17cb-507c-41e8-a326-d88d778f1eea] Running
	I0229 02:25:00.844069  361093 system_pods.go:89] "metrics-server-57f55c9bc5-kdvvw" [b70c8f8c-dd5b-4653-838d-3815d52cc0f3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0229 02:25:00.844076  361093 system_pods.go:89] "storage-provisioner" [97993825-092f-4d18-aeeb-64fde6ba795e] Running
	I0229 02:25:00.844086  361093 system_pods.go:126] duration metric: took 6.380306ms to wait for k8s-apps to be running ...
	I0229 02:25:00.844095  361093 system_svc.go:44] waiting for kubelet service to be running ....
	I0229 02:25:00.844144  361093 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 02:25:00.862900  361093 system_svc.go:56] duration metric: took 18.796697ms WaitForService to wait for kubelet.
	I0229 02:25:00.862927  361093 kubeadm.go:581] duration metric: took 4m36.892603056s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0229 02:25:00.862952  361093 node_conditions.go:102] verifying NodePressure condition ...
	I0229 02:25:00.865826  361093 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0229 02:25:00.865846  361093 node_conditions.go:123] node cpu capacity is 2
	I0229 02:25:00.865899  361093 node_conditions.go:105] duration metric: took 2.937756ms to run NodePressure ...
	I0229 02:25:00.865915  361093 start.go:228] waiting for startup goroutines ...
	I0229 02:25:00.865931  361093 start.go:233] waiting for cluster config update ...
	I0229 02:25:00.865971  361093 start.go:242] writing updated cluster config ...
	I0229 02:25:00.866301  361093 ssh_runner.go:195] Run: rm -f paused
	I0229 02:25:00.917044  361093 start.go:601] kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)
	I0229 02:25:00.920135  361093 out.go:177] * Done! kubectl is now configured to use "embed-certs-665766" cluster and "default" namespace by default
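	
	(The closing line reports a minor version skew of 1, kubectl 1.29.2 against cluster 1.28.4, which is within kubectl's supported window of one minor version in either direction. To confirm both versions for the active profile, assuming kubectl is on PATH:)
	
	  kubectl version --output=json   # prints clientVersion and serverVersion for comparison
	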
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> containerd <==
	Feb 29 02:14:34 old-k8s-version-254968 containerd[614]: time="2024-02-29T02:14:34.165624877Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Feb 29 02:14:34 old-k8s-version-254968 containerd[614]: time="2024-02-29T02:14:34.165698335Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Feb 29 02:14:34 old-k8s-version-254968 containerd[614]: time="2024-02-29T02:14:34.165745697Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Feb 29 02:14:34 old-k8s-version-254968 containerd[614]: time="2024-02-29T02:14:34.165787935Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Feb 29 02:14:34 old-k8s-version-254968 containerd[614]: time="2024-02-29T02:14:34.165917244Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Feb 29 02:14:34 old-k8s-version-254968 containerd[614]: time="2024-02-29T02:14:34.165968270Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Feb 29 02:14:34 old-k8s-version-254968 containerd[614]: time="2024-02-29T02:14:34.166006973Z" level=info msg="NRI interface is disabled by configuration."
	Feb 29 02:14:34 old-k8s-version-254968 containerd[614]: time="2024-02-29T02:14:34.166044436Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
	Feb 29 02:14:34 old-k8s-version-254968 containerd[614]: time="2024-02-29T02:14:34.166543615Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:true IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath:/etc/containerd/certs.d Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress: StreamServerPort:10010 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.1 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s} ContainerdRootDir:/mnt/vda1/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/mnt/vda1/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
	Feb 29 02:14:34 old-k8s-version-254968 containerd[614]: time="2024-02-29T02:14:34.166717042Z" level=info msg="Connect containerd service"
	Feb 29 02:14:34 old-k8s-version-254968 containerd[614]: time="2024-02-29T02:14:34.166807336Z" level=info msg="using legacy CRI server"
	Feb 29 02:14:34 old-k8s-version-254968 containerd[614]: time="2024-02-29T02:14:34.166857305Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Feb 29 02:14:34 old-k8s-version-254968 containerd[614]: time="2024-02-29T02:14:34.166925237Z" level=info msg="Get image filesystem path \"/mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
	Feb 29 02:14:34 old-k8s-version-254968 containerd[614]: time="2024-02-29T02:14:34.168440964Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Feb 29 02:14:34 old-k8s-version-254968 containerd[614]: time="2024-02-29T02:14:34.169467518Z" level=info msg="Start subscribing containerd event"
	Feb 29 02:14:34 old-k8s-version-254968 containerd[614]: time="2024-02-29T02:14:34.169852898Z" level=info msg="Start recovering state"
	Feb 29 02:14:34 old-k8s-version-254968 containerd[614]: time="2024-02-29T02:14:34.169759950Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Feb 29 02:14:34 old-k8s-version-254968 containerd[614]: time="2024-02-29T02:14:34.170354766Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Feb 29 02:14:34 old-k8s-version-254968 containerd[614]: time="2024-02-29T02:14:34.216434996Z" level=info msg="Start event monitor"
	Feb 29 02:14:34 old-k8s-version-254968 containerd[614]: time="2024-02-29T02:14:34.216570893Z" level=info msg="Start snapshots syncer"
	Feb 29 02:14:34 old-k8s-version-254968 containerd[614]: time="2024-02-29T02:14:34.216584766Z" level=info msg="Start cni network conf syncer for default"
	Feb 29 02:14:34 old-k8s-version-254968 containerd[614]: time="2024-02-29T02:14:34.216590881Z" level=info msg="Start streaming server"
	Feb 29 02:14:34 old-k8s-version-254968 containerd[614]: time="2024-02-29T02:14:34.216768197Z" level=info msg="containerd successfully booted in 0.090655s"
	Feb 29 02:18:50 old-k8s-version-254968 containerd[614]: time="2024-02-29T02:18:50.110070145Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE        \"/etc/cni/net.d/87-podman-bridge.conflist.mk_disabled\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Feb 29 02:18:50 old-k8s-version-254968 containerd[614]: time="2024-02-29T02:18:50.110410570Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE        \"/etc/cni/net.d/.keep\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Feb29 02:14] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.054511] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.043108] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.634203] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.396865] systemd-fstab-generator[114]: Ignoring "noauto" option for root device
	[  +1.706137] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +5.508894] systemd-fstab-generator[477]: Ignoring "noauto" option for root device
	[  +0.058297] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.061765] systemd-fstab-generator[489]: Ignoring "noauto" option for root device
	[  +0.223600] systemd-fstab-generator[503]: Ignoring "noauto" option for root device
	[  +0.145548] systemd-fstab-generator[515]: Ignoring "noauto" option for root device
	[  +0.315865] systemd-fstab-generator[544]: Ignoring "noauto" option for root device
	[  +6.792896] systemd-fstab-generator[605]: Ignoring "noauto" option for root device
	[  +0.059937] kauditd_printk_skb: 158 callbacks suppressed
	[ +14.232197] systemd-fstab-generator[867]: Ignoring "noauto" option for root device
	[  +0.066766] kauditd_printk_skb: 18 callbacks suppressed
	[Feb29 02:18] systemd-fstab-generator[7959]: Ignoring "noauto" option for root device
	[  +0.063045] kauditd_printk_skb: 21 callbacks suppressed
	[Feb29 02:20] systemd-fstab-generator[9666]: Ignoring "noauto" option for root device
	[  +0.073310] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 02:37:43 up 23 min,  0 users,  load average: 0.42, 0.15, 0.12
	Linux old-k8s-version-254968 5.10.207 #1 SMP Fri Feb 23 02:44:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Feb 29 02:37:41 old-k8s-version-254968 kubelet[24010]: F0229 02:37:41.720547   24010 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Feb 29 02:37:41 old-k8s-version-254968 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Feb 29 02:37:41 old-k8s-version-254968 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Feb 29 02:37:42 old-k8s-version-254968 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1353.
	Feb 29 02:37:42 old-k8s-version-254968 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Feb 29 02:37:42 old-k8s-version-254968 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Feb 29 02:37:42 old-k8s-version-254968 kubelet[24038]: I0229 02:37:42.466472   24038 server.go:410] Version: v1.16.0
	Feb 29 02:37:42 old-k8s-version-254968 kubelet[24038]: I0229 02:37:42.466746   24038 plugins.go:100] No cloud provider specified.
	Feb 29 02:37:42 old-k8s-version-254968 kubelet[24038]: I0229 02:37:42.466758   24038 server.go:773] Client rotation is on, will bootstrap in background
	Feb 29 02:37:42 old-k8s-version-254968 kubelet[24038]: I0229 02:37:42.469739   24038 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Feb 29 02:37:42 old-k8s-version-254968 kubelet[24038]: W0229 02:37:42.471414   24038 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Feb 29 02:37:42 old-k8s-version-254968 kubelet[24038]: F0229 02:37:42.471651   24038 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Feb 29 02:37:42 old-k8s-version-254968 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Feb 29 02:37:42 old-k8s-version-254968 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Feb 29 02:37:43 old-k8s-version-254968 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1354.
	Feb 29 02:37:43 old-k8s-version-254968 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Feb 29 02:37:43 old-k8s-version-254968 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Feb 29 02:37:43 old-k8s-version-254968 kubelet[24102]: I0229 02:37:43.203779   24102 server.go:410] Version: v1.16.0
	Feb 29 02:37:43 old-k8s-version-254968 kubelet[24102]: I0229 02:37:43.204028   24102 plugins.go:100] No cloud provider specified.
	Feb 29 02:37:43 old-k8s-version-254968 kubelet[24102]: I0229 02:37:43.204038   24102 server.go:773] Client rotation is on, will bootstrap in background
	Feb 29 02:37:43 old-k8s-version-254968 kubelet[24102]: I0229 02:37:43.206122   24102 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Feb 29 02:37:43 old-k8s-version-254968 kubelet[24102]: W0229 02:37:43.206988   24102 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Feb 29 02:37:43 old-k8s-version-254968 kubelet[24102]: F0229 02:37:43.207063   24102 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Feb 29 02:37:43 old-k8s-version-254968 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Feb 29 02:37:43 old-k8s-version-254968 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-254968 -n old-k8s-version-254968
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-254968 -n old-k8s-version-254968: exit status 2 (250.203234ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-254968" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (354.98s)
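The kubelet journal above shows why this test fails: systemd is in a tight restart loop (restart counter 1353 -> 1354 within two seconds), and every kubelet attempt dies at the same fatal line, "failed to run Kubelet: mountpoint for cpu not found". The v1.16 kubelet run by this old-k8s-version profile predates cgroup v2 support and exits when it cannot find a cgroup v1 hierarchy with the cpu controller mounted. Below is a minimal diagnostic sketch in Go that approximates that mountpoint check; it is an assumed standalone helper, not part of the minikube test suite or of kubelet itself:

// checkcpucgroup.go - hypothetical diagnostic helper (not from minikube or kubelet).
// Scans /proc/mounts for a cgroup v1 hierarchy carrying the "cpu" controller,
// which is the mountpoint the v1.16 kubelet above reports as missing.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	f, err := os.Open("/proc/mounts")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	scanner := bufio.NewScanner(f)
	for scanner.Scan() {
		// /proc/mounts fields: device mountpoint fstype options dump pass
		fields := strings.Fields(scanner.Text())
		if len(fields) < 4 || fields[2] != "cgroup" {
			continue
		}
		// In a cgroup v1 layout, the cpu controller appears as a mount option.
		for _, opt := range strings.Split(fields[3], ",") {
			if opt == "cpu" {
				fmt.Printf("cpu cgroup mounted at %s\n", fields[1])
				return
			}
		}
	}
	// This matches the condition behind "failed to run Kubelet: mountpoint for cpu not found".
	fmt.Println("no cpu cgroup mountpoint found")
	os.Exit(1)
}

Run inside the guest (for example via minikube ssh -p old-k8s-version-254968), a check like this would distinguish a VM whose cgroup mounts never came up from a kubelet-side misconfiguration.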

                                                
                                    

Test pass (266/316)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.16.0/json-events 55.37
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.07
9 TestDownloadOnly/v1.16.0/DeleteAll 0.14
10 TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.28.4/json-events 21.97
13 TestDownloadOnly/v1.28.4/preload-exists 0
17 TestDownloadOnly/v1.28.4/LogsDuration 0.07
18 TestDownloadOnly/v1.28.4/DeleteAll 0.14
19 TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds 0.13
21 TestDownloadOnly/v1.29.0-rc.2/json-events 13.38
22 TestDownloadOnly/v1.29.0-rc.2/preload-exists 0
26 TestDownloadOnly/v1.29.0-rc.2/LogsDuration 0.07
27 TestDownloadOnly/v1.29.0-rc.2/DeleteAll 0.14
28 TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds 0.13
30 TestBinaryMirror 0.57
31 TestOffline 105.58
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
36 TestAddons/Setup 147.91
38 TestAddons/parallel/Registry 16.48
39 TestAddons/parallel/Ingress 23.22
40 TestAddons/parallel/InspektorGadget 11.12
41 TestAddons/parallel/MetricsServer 6.87
42 TestAddons/parallel/HelmTiller 14.37
44 TestAddons/parallel/CSI 84.95
45 TestAddons/parallel/Headlamp 13.87
46 TestAddons/parallel/CloudSpanner 5.87
47 TestAddons/parallel/LocalPath 55.71
48 TestAddons/parallel/NvidiaDevicePlugin 6.65
49 TestAddons/parallel/Yakd 6.01
52 TestAddons/serial/GCPAuth/Namespaces 0.12
53 TestAddons/StoppedEnableDisable 92.56
54 TestCertOptions 81.94
55 TestCertExpiration 266.8
57 TestForceSystemdFlag 80.35
58 TestForceSystemdEnv 78.4
60 TestKVMDriverInstallOrUpdate 4.45
64 TestErrorSpam/setup 43.32
65 TestErrorSpam/start 0.37
66 TestErrorSpam/status 0.76
67 TestErrorSpam/pause 1.59
68 TestErrorSpam/unpause 1.65
69 TestErrorSpam/stop 2.26
72 TestFunctional/serial/CopySyncFile 0
73 TestFunctional/serial/StartWithProxy 98.22
74 TestFunctional/serial/AuditLog 0
75 TestFunctional/serial/SoftStart 6.2
76 TestFunctional/serial/KubeContext 0.04
77 TestFunctional/serial/KubectlGetPods 0.07
80 TestFunctional/serial/CacheCmd/cache/add_remote 3.32
81 TestFunctional/serial/CacheCmd/cache/add_local 2.41
82 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
83 TestFunctional/serial/CacheCmd/cache/list 0.06
84 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.24
85 TestFunctional/serial/CacheCmd/cache/cache_reload 1.83
86 TestFunctional/serial/CacheCmd/cache/delete 0.12
87 TestFunctional/serial/MinikubeKubectlCmd 0.12
88 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.12
89 TestFunctional/serial/ExtraConfig 37.63
90 TestFunctional/serial/ComponentHealth 0.07
91 TestFunctional/serial/LogsCmd 1.51
92 TestFunctional/serial/LogsFileCmd 1.46
93 TestFunctional/serial/InvalidService 4.53
95 TestFunctional/parallel/ConfigCmd 0.45
96 TestFunctional/parallel/DashboardCmd 16.06
97 TestFunctional/parallel/DryRun 0.3
98 TestFunctional/parallel/InternationalLanguage 0.15
99 TestFunctional/parallel/StatusCmd 0.86
103 TestFunctional/parallel/ServiceCmdConnect 21.48
104 TestFunctional/parallel/AddonsCmd 0.18
105 TestFunctional/parallel/PersistentVolumeClaim 29.52
107 TestFunctional/parallel/SSHCmd 0.38
108 TestFunctional/parallel/CpCmd 1.43
109 TestFunctional/parallel/MySQL 27.17
110 TestFunctional/parallel/FileSync 0.24
111 TestFunctional/parallel/CertSync 1.47
115 TestFunctional/parallel/NodeLabels 0.06
117 TestFunctional/parallel/NonActiveRuntimeDisabled 0.47
119 TestFunctional/parallel/License 0.64
120 TestFunctional/parallel/UpdateContextCmd/no_changes 0.11
121 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.1
122 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.11
123 TestFunctional/parallel/ImageCommands/ImageListShort 0.22
124 TestFunctional/parallel/ImageCommands/ImageListTable 0.25
125 TestFunctional/parallel/ImageCommands/ImageListJson 0.28
126 TestFunctional/parallel/ImageCommands/ImageListYaml 0.22
127 TestFunctional/parallel/ImageCommands/ImageBuild 5
128 TestFunctional/parallel/ImageCommands/Setup 2.08
129 TestFunctional/parallel/Version/short 0.06
130 TestFunctional/parallel/Version/components 0.53
131 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 4.63
132 TestFunctional/parallel/ServiceCmd/DeployApp 21.22
133 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.88
134 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 7.29
135 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.17
136 TestFunctional/parallel/ImageCommands/ImageRemove 0.52
137 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.7
138 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.28
139 TestFunctional/parallel/ProfileCmd/profile_not_create 0.31
140 TestFunctional/parallel/ProfileCmd/profile_list 0.26
141 TestFunctional/parallel/ProfileCmd/profile_json_output 0.28
142 TestFunctional/parallel/MountCmd/any-port 7.45
143 TestFunctional/parallel/ServiceCmd/List 0.46
144 TestFunctional/parallel/ServiceCmd/JSONOutput 0.56
145 TestFunctional/parallel/ServiceCmd/HTTPS 0.37
146 TestFunctional/parallel/ServiceCmd/Format 0.57
147 TestFunctional/parallel/ServiceCmd/URL 0.41
157 TestFunctional/parallel/MountCmd/specific-port 1.52
158 TestFunctional/parallel/MountCmd/VerifyCleanup 1.62
159 TestFunctional/delete_addon-resizer_images 0.07
160 TestFunctional/delete_my-image_image 0.01
161 TestFunctional/delete_minikube_cached_images 0.01
172 TestJSONOutput/start/Command 100.15
173 TestJSONOutput/start/Audit 0
175 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
176 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
178 TestJSONOutput/pause/Command 0.74
179 TestJSONOutput/pause/Audit 0
181 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
182 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
184 TestJSONOutput/unpause/Command 0.66
185 TestJSONOutput/unpause/Audit 0
187 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
188 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
190 TestJSONOutput/stop/Command 7.11
191 TestJSONOutput/stop/Audit 0
193 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
194 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
195 TestErrorJSONOutput 0.21
200 TestMainNoArgs 0.06
201 TestMinikubeProfile 95.1
204 TestMountStart/serial/StartWithMountFirst 32.64
205 TestMountStart/serial/VerifyMountFirst 0.4
206 TestMountStart/serial/StartWithMountSecond 31.53
207 TestMountStart/serial/VerifyMountSecond 0.39
208 TestMountStart/serial/DeleteFirst 0.88
209 TestMountStart/serial/VerifyMountPostDelete 0.39
210 TestMountStart/serial/Stop 1.23
211 TestMountStart/serial/RestartStopped 22.96
212 TestMountStart/serial/VerifyMountPostStop 0.39
215 TestMultiNode/serial/FreshStart2Nodes 185.08
216 TestMultiNode/serial/DeployApp2Nodes 5.71
217 TestMultiNode/serial/PingHostFrom2Pods 0.89
218 TestMultiNode/serial/AddNode 40.45
219 TestMultiNode/serial/MultiNodeLabels 0.06
220 TestMultiNode/serial/ProfileList 0.21
221 TestMultiNode/serial/CopyFile 7.49
222 TestMultiNode/serial/StopNode 2.29
223 TestMultiNode/serial/StartAfterStop 23.99
224 TestMultiNode/serial/RestartKeepsNodes 301.98
225 TestMultiNode/serial/DeleteNode 1.74
226 TestMultiNode/serial/StopMultiNode 183.74
227 TestMultiNode/serial/RestartMultiNode 88.22
228 TestMultiNode/serial/ValidateNameConflict 48.92
233 TestPreload 294.83
235 TestScheduledStopUnix 120.42
239 TestRunningBinaryUpgrade 233.39
244 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
245 TestNoKubernetes/serial/StartWithK8s 100.69
246 TestNoKubernetes/serial/StartWithStopK8s 56.29
247 TestNoKubernetes/serial/Start 36
248 TestNoKubernetes/serial/VerifyK8sNotRunning 0.23
249 TestNoKubernetes/serial/ProfileList 28.81
250 TestNoKubernetes/serial/Stop 2.68
251 TestNoKubernetes/serial/StartNoArgs 24.14
252 TestStoppedBinaryUpgrade/Setup 2.55
253 TestStoppedBinaryUpgrade/Upgrade 179.01
254 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.21
262 TestNetworkPlugins/group/false 3.53
274 TestPause/serial/Start 141.49
275 TestNetworkPlugins/group/auto/Start 99.39
276 TestPause/serial/SecondStartNoReconfiguration 7.19
277 TestPause/serial/Pause 0.99
278 TestPause/serial/VerifyStatus 0.25
279 TestPause/serial/Unpause 0.7
280 TestPause/serial/PauseAgain 0.81
281 TestPause/serial/DeletePaused 1.01
282 TestPause/serial/VerifyDeletedResources 0.42
283 TestNetworkPlugins/group/calico/Start 93.94
284 TestStoppedBinaryUpgrade/MinikubeLogs 1.13
285 TestNetworkPlugins/group/custom-flannel/Start 99.73
286 TestNetworkPlugins/group/auto/KubeletFlags 0.22
287 TestNetworkPlugins/group/auto/NetCatPod 9.26
288 TestNetworkPlugins/group/auto/DNS 0.18
289 TestNetworkPlugins/group/auto/Localhost 0.17
290 TestNetworkPlugins/group/auto/HairPin 0.14
291 TestNetworkPlugins/group/kindnet/Start 69.17
292 TestNetworkPlugins/group/calico/ControllerPod 6.01
293 TestNetworkPlugins/group/calico/KubeletFlags 0.22
294 TestNetworkPlugins/group/calico/NetCatPod 9.29
295 TestNetworkPlugins/group/calico/DNS 0.17
296 TestNetworkPlugins/group/calico/Localhost 0.14
297 TestNetworkPlugins/group/calico/HairPin 0.13
298 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.22
299 TestNetworkPlugins/group/custom-flannel/NetCatPod 9.24
300 TestNetworkPlugins/group/custom-flannel/DNS 0.25
301 TestNetworkPlugins/group/custom-flannel/Localhost 0.2
302 TestNetworkPlugins/group/custom-flannel/HairPin 0.22
303 TestNetworkPlugins/group/flannel/Start 93.94
304 TestNetworkPlugins/group/enable-default-cni/Start 132.5
305 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
306 TestNetworkPlugins/group/kindnet/KubeletFlags 0.22
307 TestNetworkPlugins/group/kindnet/NetCatPod 10.24
308 TestNetworkPlugins/group/kindnet/DNS 0.26
309 TestNetworkPlugins/group/kindnet/Localhost 0.17
310 TestNetworkPlugins/group/kindnet/HairPin 0.18
311 TestNetworkPlugins/group/bridge/Start 112.13
314 TestNetworkPlugins/group/flannel/ControllerPod 6.01
315 TestNetworkPlugins/group/flannel/KubeletFlags 0.29
316 TestNetworkPlugins/group/flannel/NetCatPod 12.44
317 TestNetworkPlugins/group/flannel/DNS 0.19
318 TestNetworkPlugins/group/flannel/Localhost 0.14
319 TestNetworkPlugins/group/flannel/HairPin 0.17
321 TestStartStop/group/no-preload/serial/FirstStart 139.71
322 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.24
323 TestNetworkPlugins/group/enable-default-cni/NetCatPod 9.29
324 TestNetworkPlugins/group/enable-default-cni/DNS 0.16
325 TestNetworkPlugins/group/enable-default-cni/Localhost 0.14
326 TestNetworkPlugins/group/enable-default-cni/HairPin 0.14
328 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 101.4
329 TestNetworkPlugins/group/bridge/KubeletFlags 0.32
330 TestNetworkPlugins/group/bridge/NetCatPod 9.26
331 TestNetworkPlugins/group/bridge/DNS 0.17
332 TestNetworkPlugins/group/bridge/Localhost 0.12
333 TestNetworkPlugins/group/bridge/HairPin 0.12
335 TestStartStop/group/newest-cni/serial/FirstStart 60.56
336 TestStartStop/group/newest-cni/serial/DeployApp 0
337 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.16
338 TestStartStop/group/newest-cni/serial/Stop 2.11
339 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.22
340 TestStartStop/group/newest-cni/serial/SecondStart 42.94
341 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 11.3
342 TestStartStop/group/no-preload/serial/DeployApp 10.32
343 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.14
344 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.32
345 TestStartStop/group/no-preload/serial/Stop 91.83
346 TestStartStop/group/default-k8s-diff-port/serial/Stop 92.26
347 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
348 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
349 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.25
350 TestStartStop/group/newest-cni/serial/Pause 2.52
352 TestStartStop/group/embed-certs/serial/FirstStart 101.32
355 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.23
356 TestStartStop/group/no-preload/serial/SecondStart 324.89
357 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.23
358 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 344.88
359 TestStartStop/group/embed-certs/serial/DeployApp 10.34
360 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.19
361 TestStartStop/group/embed-certs/serial/Stop 92.26
362 TestStartStop/group/old-k8s-version/serial/Stop 1.36
363 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.21
365 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.22
366 TestStartStop/group/embed-certs/serial/SecondStart 601.08
367 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 13.01
368 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.09
369 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.26
370 TestStartStop/group/no-preload/serial/Pause 3.14
371 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 14.01
372 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.08
373 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.23
374 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.81
376 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
377 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.08
378 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.24
379 TestStartStop/group/embed-certs/serial/Pause 2.69
TestDownloadOnly/v1.16.0/json-events (55.37s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-913940 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-913940 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd: (55.36712234s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (55.37s)

                                                
                                    
TestDownloadOnly/v1.16.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-913940
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-913940: exit status 85 (74.332978ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-913940 | jenkins | v1.32.0 | 29 Feb 24 01:10 UTC |          |
	|         | -p download-only-913940        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/29 01:10:27
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.22.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0229 01:10:27.932070  316348 out.go:291] Setting OutFile to fd 1 ...
	I0229 01:10:27.932208  316348 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 01:10:27.932220  316348 out.go:304] Setting ErrFile to fd 2...
	I0229 01:10:27.932226  316348 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 01:10:27.932966  316348 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18063-309085/.minikube/bin
	W0229 01:10:27.933193  316348 root.go:314] Error reading config file at /home/jenkins/minikube-integration/18063-309085/.minikube/config/config.json: open /home/jenkins/minikube-integration/18063-309085/.minikube/config/config.json: no such file or directory
	I0229 01:10:27.934184  316348 out.go:298] Setting JSON to true
	I0229 01:10:27.935123  316348 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":3172,"bootTime":1709165856,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1052-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0229 01:10:27.935190  316348 start.go:139] virtualization: kvm guest
	I0229 01:10:27.937608  316348 out.go:97] [download-only-913940] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0229 01:10:27.938974  316348 out.go:169] MINIKUBE_LOCATION=18063
	I0229 01:10:27.937770  316348 notify.go:220] Checking for updates...
	W0229 01:10:27.937819  316348 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/18063-309085/.minikube/cache/preloaded-tarball: no such file or directory
	I0229 01:10:27.941451  316348 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0229 01:10:27.942824  316348 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18063-309085/kubeconfig
	I0229 01:10:27.944055  316348 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18063-309085/.minikube
	I0229 01:10:27.945296  316348 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0229 01:10:27.947464  316348 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0229 01:10:27.947679  316348 driver.go:392] Setting default libvirt URI to qemu:///system
	I0229 01:10:27.978839  316348 out.go:97] Using the kvm2 driver based on user configuration
	I0229 01:10:27.978873  316348 start.go:299] selected driver: kvm2
	I0229 01:10:27.978879  316348 start.go:903] validating driver "kvm2" against <nil>
	I0229 01:10:27.979212  316348 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 01:10:27.979283  316348 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18063-309085/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0229 01:10:27.994136  316348 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0229 01:10:27.994182  316348 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0229 01:10:27.994642  316348 start_flags.go:394] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0229 01:10:27.994794  316348 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I0229 01:10:27.994870  316348 cni.go:84] Creating CNI manager for ""
	I0229 01:10:27.994883  316348 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0229 01:10:27.994890  316348 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0229 01:10:27.994899  316348 start_flags.go:323] config:
	{Name:download-only-913940 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-913940 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 01:10:27.995087  316348 iso.go:125] acquiring lock: {Name:mk1a6013324bd96d611c2de882ca0af6f4df38f8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 01:10:27.996737  316348 out.go:97] Downloading VM boot image ...
	I0229 01:10:27.996767  316348 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso.sha256 -> /home/jenkins/minikube-integration/18063-309085/.minikube/cache/iso/amd64/minikube-v1.32.1-1708638130-18020-amd64.iso
	I0229 01:10:37.679080  316348 out.go:97] Starting control plane node download-only-913940 in cluster download-only-913940
	I0229 01:10:37.679105  316348 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I0229 01:10:37.796058  316348 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4
	I0229 01:10:37.796095  316348 cache.go:56] Caching tarball of preloaded images
	I0229 01:10:37.796298  316348 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I0229 01:10:37.798411  316348 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0229 01:10:37.798437  316348 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4 ...
	I0229 01:10:37.909063  316348 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4?checksum=md5:d96a2b2afa188e17db7ddabb58d563fd -> /home/jenkins/minikube-integration/18063-309085/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4
	I0229 01:10:50.054942  316348 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4 ...
	I0229 01:10:50.055056  316348 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/18063-309085/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4 ...
	I0229 01:10:50.899379  316348 cache.go:59] Finished verifying existence of preloaded tar for  v1.16.0 on containerd
	I0229 01:10:50.899823  316348 profile.go:148] Saving config to /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/download-only-913940/config.json ...
	I0229 01:10:50.899858  316348 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/download-only-913940/config.json: {Name:mk14e2bc00b7c6907a9a232751737e984a86cbb9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 01:10:50.900047  316348 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I0229 01:10:50.900245  316348 download.go:107] Downloading: https://dl.k8s.io/release/v1.16.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/linux/amd64/kubectl.sha1 -> /home/jenkins/minikube-integration/18063-309085/.minikube/cache/linux/amd64/v1.16.0/kubectl
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-913940"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.07s)
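The "Last Start" log above also documents minikube's download flow: the boot ISO, the preload tarball, and kubectl are each fetched from a URL carrying a ?checksum=... query string, which is consumed locally to verify the downloaded bytes rather than being sent to the server (minikube's download package delegates this to the go-getter library). Below is a minimal sketch of that download-then-verify pattern; it is a hypothetical standalone helper, not minikube's actual download.go:

// verifydownload.go - hypothetical sketch of the download-then-verify pattern
// visible in the preload log lines above; not minikube's implementation.
package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
)

// download streams url into dest and returns the md5 of the bytes written.
func download(url, dest string) (string, error) {
	resp, err := http.Get(url)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()

	out, err := os.Create(dest)
	if err != nil {
		return "", err
	}
	defer out.Close()

	h := md5.New()
	// Tee the response body so the file and the hash see identical bytes.
	if _, err := io.Copy(io.MultiWriter(out, h), resp.Body); err != nil {
		return "", err
	}
	return hex.EncodeToString(h.Sum(nil)), nil
}

func main() {
	if len(os.Args) != 4 {
		fmt.Fprintln(os.Stderr, "usage: verifydownload <url> <dest> <md5>")
		os.Exit(2)
	}
	sum, err := download(os.Args[1], os.Args[2])
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if sum != os.Args[3] {
		fmt.Fprintf(os.Stderr, "checksum mismatch: got %s, want %s\n", sum, os.Args[3])
		os.Exit(1)
	}
	fmt.Println("checksum verified:", sum)
}

Invoked with the preload URL and the md5 logged above (d96a2b2afa188e17db7ddabb58d563fd for the v1.16.0 tarball), it mirrors the verification step recorded at 01:10:50.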

                                                
                                    
TestDownloadOnly/v1.16.0/DeleteAll (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.16.0/DeleteAll (0.14s)

                                                
                                    
TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-913940
--- PASS: TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestDownloadOnly/v1.28.4/json-events (21.97s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-992225 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-992225 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd: (21.965293102s)
--- PASS: TestDownloadOnly/v1.28.4/json-events (21.97s)

                                                
                                    
TestDownloadOnly/v1.28.4/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/preload-exists
--- PASS: TestDownloadOnly/v1.28.4/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.4/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-992225
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-992225: exit status 85 (72.656571ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-913940 | jenkins | v1.32.0 | 29 Feb 24 01:10 UTC |                     |
	|         | -p download-only-913940        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.32.0 | 29 Feb 24 01:11 UTC | 29 Feb 24 01:11 UTC |
	| delete  | -p download-only-913940        | download-only-913940 | jenkins | v1.32.0 | 29 Feb 24 01:11 UTC | 29 Feb 24 01:11 UTC |
	| start   | -o=json --download-only        | download-only-992225 | jenkins | v1.32.0 | 29 Feb 24 01:11 UTC |                     |
	|         | -p download-only-992225        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/29 01:11:23
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.22.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0229 01:11:23.651843  316616 out.go:291] Setting OutFile to fd 1 ...
	I0229 01:11:23.651942  316616 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 01:11:23.651950  316616 out.go:304] Setting ErrFile to fd 2...
	I0229 01:11:23.651954  316616 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 01:11:23.652154  316616 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18063-309085/.minikube/bin
	I0229 01:11:23.652691  316616 out.go:298] Setting JSON to true
	I0229 01:11:23.653640  316616 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":3228,"bootTime":1709165856,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1052-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0229 01:11:23.653708  316616 start.go:139] virtualization: kvm guest
	I0229 01:11:23.655677  316616 out.go:97] [download-only-992225] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0229 01:11:23.657133  316616 out.go:169] MINIKUBE_LOCATION=18063
	I0229 01:11:23.655846  316616 notify.go:220] Checking for updates...
	I0229 01:11:23.659596  316616 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0229 01:11:23.660922  316616 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18063-309085/kubeconfig
	I0229 01:11:23.662215  316616 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18063-309085/.minikube
	I0229 01:11:23.663355  316616 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0229 01:11:23.665629  316616 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0229 01:11:23.665843  316616 driver.go:392] Setting default libvirt URI to qemu:///system
	I0229 01:11:23.698823  316616 out.go:97] Using the kvm2 driver based on user configuration
	I0229 01:11:23.698859  316616 start.go:299] selected driver: kvm2
	I0229 01:11:23.698865  316616 start.go:903] validating driver "kvm2" against <nil>
	I0229 01:11:23.699208  316616 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 01:11:23.699297  316616 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18063-309085/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0229 01:11:23.714879  316616 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0229 01:11:23.714951  316616 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0229 01:11:23.715467  316616 start_flags.go:394] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0229 01:11:23.715600  316616 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I0229 01:11:23.715681  316616 cni.go:84] Creating CNI manager for ""
	I0229 01:11:23.715694  316616 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0229 01:11:23.715702  316616 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0229 01:11:23.715712  316616 start_flags.go:323] config:
	{Name:download-only-992225 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:download-only-992225 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 01:11:23.715847  316616 iso.go:125] acquiring lock: {Name:mk1a6013324bd96d611c2de882ca0af6f4df38f8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 01:11:23.717312  316616 out.go:97] Starting control plane node download-only-992225 in cluster download-only-992225
	I0229 01:11:23.717329  316616 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I0229 01:11:23.824132  316616 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-amd64.tar.lz4
	I0229 01:11:23.824169  316616 cache.go:56] Caching tarball of preloaded images
	I0229 01:11:23.824335  316616 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I0229 01:11:23.826089  316616 out.go:97] Downloading Kubernetes v1.28.4 preload ...
	I0229 01:11:23.826101  316616 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-amd64.tar.lz4 ...
	I0229 01:11:23.938052  316616 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-amd64.tar.lz4?checksum=md5:36bbd14dd3f64efb2d3840dd67e48180 -> /home/jenkins/minikube-integration/18063-309085/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-amd64.tar.lz4
	I0229 01:11:37.118741  316616 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-amd64.tar.lz4 ...
	I0229 01:11:37.118865  316616 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/18063-309085/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-amd64.tar.lz4 ...
	I0229 01:11:38.111159  316616 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on containerd
	I0229 01:11:38.111542  316616 profile.go:148] Saving config to /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/download-only-992225/config.json ...
	I0229 01:11:38.111579  316616 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/download-only-992225/config.json: {Name:mkebf73dc62c00fc61e0ac78d25d58aac7e1ecce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 01:11:38.111791  316616 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I0229 01:11:38.112006  316616 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/18063-309085/.minikube/cache/linux/amd64/v1.28.4/kubectl
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-992225"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.4/LogsDuration (0.07s)

                                                
                                    
TestDownloadOnly/v1.28.4/DeleteAll (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.4/DeleteAll (0.14s)

                                                
                                    
TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-992225
--- PASS: TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/json-events (13.38s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-501770 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-501770 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd: (13.382296682s)
--- PASS: TestDownloadOnly/v1.29.0-rc.2/json-events (13.38s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/preload-exists
--- PASS: TestDownloadOnly/v1.29.0-rc.2/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-501770
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-501770: exit status 85 (72.292151ms)

-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |               Args                |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only           | download-only-913940 | jenkins | v1.32.0 | 29 Feb 24 01:10 UTC |                     |
	|         | -p download-only-913940           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0      |                      |         |         |                     |                     |
	|         | --container-runtime=containerd    |                      |         |         |                     |                     |
	|         | --driver=kvm2                     |                      |         |         |                     |                     |
	|         | --container-runtime=containerd    |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.32.0 | 29 Feb 24 01:11 UTC | 29 Feb 24 01:11 UTC |
	| delete  | -p download-only-913940           | download-only-913940 | jenkins | v1.32.0 | 29 Feb 24 01:11 UTC | 29 Feb 24 01:11 UTC |
	| start   | -o=json --download-only           | download-only-992225 | jenkins | v1.32.0 | 29 Feb 24 01:11 UTC |                     |
	|         | -p download-only-992225           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4      |                      |         |         |                     |                     |
	|         | --container-runtime=containerd    |                      |         |         |                     |                     |
	|         | --driver=kvm2                     |                      |         |         |                     |                     |
	|         | --container-runtime=containerd    |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.32.0 | 29 Feb 24 01:11 UTC | 29 Feb 24 01:11 UTC |
	| delete  | -p download-only-992225           | download-only-992225 | jenkins | v1.32.0 | 29 Feb 24 01:11 UTC | 29 Feb 24 01:11 UTC |
	| start   | -o=json --download-only           | download-only-501770 | jenkins | v1.32.0 | 29 Feb 24 01:11 UTC |                     |
	|         | -p download-only-501770           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2 |                      |         |         |                     |                     |
	|         | --container-runtime=containerd    |                      |         |         |                     |                     |
	|         | --driver=kvm2                     |                      |         |         |                     |                     |
	|         | --container-runtime=containerd    |                      |         |         |                     |                     |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/29 01:11:45
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.22.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0229 01:11:45.964230  316811 out.go:291] Setting OutFile to fd 1 ...
	I0229 01:11:45.964365  316811 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 01:11:45.964377  316811 out.go:304] Setting ErrFile to fd 2...
	I0229 01:11:45.964383  316811 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 01:11:45.964602  316811 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18063-309085/.minikube/bin
	I0229 01:11:45.965185  316811 out.go:298] Setting JSON to true
	I0229 01:11:45.966111  316811 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":3250,"bootTime":1709165856,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1052-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0229 01:11:45.966180  316811 start.go:139] virtualization: kvm guest
	I0229 01:11:45.968229  316811 out.go:97] [download-only-501770] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0229 01:11:45.969742  316811 out.go:169] MINIKUBE_LOCATION=18063
	I0229 01:11:45.968397  316811 notify.go:220] Checking for updates...
	I0229 01:11:45.972164  316811 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0229 01:11:45.973414  316811 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18063-309085/kubeconfig
	I0229 01:11:45.974551  316811 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18063-309085/.minikube
	I0229 01:11:45.975690  316811 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0229 01:11:45.977829  316811 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0229 01:11:45.978052  316811 driver.go:392] Setting default libvirt URI to qemu:///system
	I0229 01:11:46.008112  316811 out.go:97] Using the kvm2 driver based on user configuration
	I0229 01:11:46.008140  316811 start.go:299] selected driver: kvm2
	I0229 01:11:46.008146  316811 start.go:903] validating driver "kvm2" against <nil>
	I0229 01:11:46.008454  316811 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 01:11:46.008537  316811 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18063-309085/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0229 01:11:46.022563  316811 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0229 01:11:46.022610  316811 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0229 01:11:46.023033  316811 start_flags.go:394] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0229 01:11:46.023158  316811 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I0229 01:11:46.023239  316811 cni.go:84] Creating CNI manager for ""
	I0229 01:11:46.023252  316811 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0229 01:11:46.023263  316811 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0229 01:11:46.023272  316811 start_flags.go:323] config:
	{Name:download-only-501770 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:download-only-501770 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 01:11:46.023393  316811 iso.go:125] acquiring lock: {Name:mk1a6013324bd96d611c2de882ca0af6f4df38f8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 01:11:46.024843  316811 out.go:97] Starting control plane node download-only-501770 in cluster download-only-501770
	I0229 01:11:46.024856  316811 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime containerd
	I0229 01:11:46.540070  316811 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-containerd-overlay2-amd64.tar.lz4
	I0229 01:11:46.540129  316811 cache.go:56] Caching tarball of preloaded images
	I0229 01:11:46.540325  316811 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime containerd
	I0229 01:11:46.542220  316811 out.go:97] Downloading Kubernetes v1.29.0-rc.2 preload ...
	I0229 01:11:46.542244  316811 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-containerd-overlay2-amd64.tar.lz4 ...
	I0229 01:11:46.660236  316811 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-containerd-overlay2-amd64.tar.lz4?checksum=md5:e143dbc3b8285cd3241a841ac2b6b7fc -> /home/jenkins/minikube-integration/18063-309085/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-containerd-overlay2-amd64.tar.lz4
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-501770"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.07s)
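
Note: the preload tarball is fetched with an md5 checksum pinned in the URL query string. A rough sketch of verifying the same artifact by hand (URL and checksum are taken from the log above; doing this manually is not part of the test):

	URL='https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-containerd-overlay2-amd64.tar.lz4'
	curl -fLO "$URL"
	md5sum preloaded-images-k8s-v18-v1.29.0-rc.2-containerd-overlay2-amd64.tar.lz4   # should print e143dbc3b8285cd3241a841ac2b6b7fc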

TestDownloadOnly/v1.29.0-rc.2/DeleteAll (0.14s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.29.0-rc.2/DeleteAll (0.14s)

TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-501770
--- PASS: TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds (0.13s)

TestBinaryMirror (0.57s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-344224 --alsologtostderr --binary-mirror http://127.0.0.1:37543 --driver=kvm2  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-344224" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-344224
--- PASS: TestBinaryMirror (0.57s)
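
Note: the test points --binary-mirror at a stub HTTP server on 127.0.0.1:37543. A hedged sketch of the same idea with a hand-rolled mirror; the ./mirror directory and its layout are assumptions, on the understanding that minikube requests binary paths under the mirror root in place of the upstream download host:

	python3 -m http.server 37543 --directory ./mirror &    # hypothetical local mirror root
	out/minikube-linux-amd64 start --download-only -p binary-mirror-demo --binary-mirror http://127.0.0.1:37543 --driver=kvm2 --container-runtime=containerd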

TestOffline (105.58s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-containerd-476423 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=containerd
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-containerd-476423 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=containerd: (1m44.515378285s)
helpers_test.go:175: Cleaning up "offline-containerd-476423" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-containerd-476423
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-containerd-476423: (1.067581938s)
--- PASS: TestOffline (105.58s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-026134
addons_test.go:928: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-026134: exit status 85 (61.750497ms)

-- stdout --
	* Profile "addons-026134" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-026134"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-026134
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-026134: exit status 85 (64.849905ms)

-- stdout --
	* Profile "addons-026134" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-026134"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)
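
Note: both pre-setup subtests exercise the same guard: addon toggles against a profile that does not exist yet exit with status 85 instead of creating state. Sketch:

	out/minikube-linux-amd64 addons enable dashboard -p addons-026134; echo $?    # 85 until "minikube start -p addons-026134" has run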

TestAddons/Setup (147.91s)

=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-amd64 start -p addons-026134 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=kvm2  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:109: (dbg) Done: out/minikube-linux-amd64 start -p addons-026134 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=kvm2  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m27.913778192s)
--- PASS: TestAddons/Setup (147.91s)

TestAddons/parallel/Registry (16.48s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:330: registry stabilized in 28.780535ms
addons_test.go:332: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-58pkt" [6eeacb43-6e25-49f4-b72a-efd881dee77e] Running
addons_test.go:332: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.003750991s
addons_test.go:335: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-kwpjf" [314539bb-0cb6-4992-b1eb-e26cf0e76555] Running
addons_test.go:335: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.006544571s
addons_test.go:340: (dbg) Run:  kubectl --context addons-026134 delete po -l run=registry-test --now
addons_test.go:345: (dbg) Run:  kubectl --context addons-026134 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:345: (dbg) Done: kubectl --context addons-026134 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.590264676s)
addons_test.go:359: (dbg) Run:  out/minikube-linux-amd64 -p addons-026134 ip
2024/02/29 01:14:44 [DEBUG] GET http://192.168.39.160:5000
addons_test.go:388: (dbg) Run:  out/minikube-linux-amd64 -p addons-026134 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.48s)
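
Note: the registry probe runs from inside the cluster, where the service name resolves via cluster DNS. The check, as the test runs it (wget --spider fetches headers without downloading a body):

	kubectl --context addons-026134 run --rm registry-test --restart=Never \
	  --image=gcr.io/k8s-minikube/busybox -it -- \
	  sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"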

TestAddons/parallel/Ingress (23.22s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-026134 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-026134 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-026134 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [13e00651-d230-446b-8eca-fcc27871a69d] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [13e00651-d230-446b-8eca-fcc27871a69d] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 13.004799169s
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p addons-026134 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:286: (dbg) Run:  kubectl --context addons-026134 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p addons-026134 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.39.160
addons_test.go:306: (dbg) Run:  out/minikube-linux-amd64 -p addons-026134 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:306: (dbg) Done: out/minikube-linux-amd64 -p addons-026134 addons disable ingress-dns --alsologtostderr -v=1: (1.056559689s)
addons_test.go:311: (dbg) Run:  out/minikube-linux-amd64 -p addons-026134 addons disable ingress --alsologtostderr -v=1
addons_test.go:311: (dbg) Done: out/minikube-linux-amd64 -p addons-026134 addons disable ingress --alsologtostderr -v=1: (7.844748629s)
--- PASS: TestAddons/parallel/Ingress (23.22s)
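
Note: the ingress test validates two paths: HTTP routing by Host header from inside the VM, and ingress-dns resolution against the node IP. A sketch combining the commands from the run above:

	IP=$(out/minikube-linux-amd64 -p addons-026134 ip)
	out/minikube-linux-amd64 -p addons-026134 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
	nslookup hello-john.test "$IP"    # resolved by the ingress-dns addon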

TestAddons/parallel/InspektorGadget (11.12s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-ltnsc" [f74ffdb2-e055-4c5f-8256-87014d99a98a] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.005027266s
addons_test.go:841: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-026134
addons_test.go:841: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-026134: (6.109957302s)
--- PASS: TestAddons/parallel/InspektorGadget (11.12s)


TestAddons/parallel/MetricsServer (6.87s)

TestAddons/parallel/MetricsServer (6.87s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:407: metrics-server stabilized in 8.572026ms
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-69cf46c98-kzxtj" [74b88b05-1cc8-4e3f-af2b-9b8f2ad4206f] Running
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.004847367s
addons_test.go:415: (dbg) Run:  kubectl --context addons-026134 top pods -n kube-system
addons_test.go:432: (dbg) Run:  out/minikube-linux-amd64 -p addons-026134 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.87s)


TestAddons/parallel/HelmTiller (14.37s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:456: tiller-deploy stabilized in 4.1243ms
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-7b677967b9-fgtdl" [2b76f5b7-3a87-433a-a11b-998dba40542c] Running
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.005192296s
addons_test.go:473: (dbg) Run:  kubectl --context addons-026134 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:473: (dbg) Done: kubectl --context addons-026134 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (8.437595611s)
addons_test.go:490: (dbg) Run:  out/minikube-linux-amd64 -p addons-026134 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (14.37s)


TestAddons/parallel/CSI (84.95s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:561: csi-hostpath-driver pods stabilized in 29.562443ms
addons_test.go:564: (dbg) Run:  kubectl --context addons-026134 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-026134 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-026134 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-026134 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-026134 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-026134 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-026134 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-026134 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-026134 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-026134 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-026134 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-026134 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-026134 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-026134 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-026134 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-026134 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-026134 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-026134 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-026134 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-026134 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-026134 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-026134 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-026134 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-026134 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-026134 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-026134 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-026134 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-026134 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-026134 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-026134 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-026134 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-026134 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-026134 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-026134 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-026134 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-026134 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-026134 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-026134 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:574: (dbg) Run:  kubectl --context addons-026134 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [912a625c-ac4d-4654-b201-5016d5a6c7a8] Pending
helpers_test.go:344: "task-pv-pod" [912a625c-ac4d-4654-b201-5016d5a6c7a8] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [912a625c-ac4d-4654-b201-5016d5a6c7a8] Running
addons_test.go:579: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 14.003809454s
addons_test.go:584: (dbg) Run:  kubectl --context addons-026134 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:589: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-026134 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-026134 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:594: (dbg) Run:  kubectl --context addons-026134 delete pod task-pv-pod
addons_test.go:594: (dbg) Done: kubectl --context addons-026134 delete pod task-pv-pod: (1.279128648s)
addons_test.go:600: (dbg) Run:  kubectl --context addons-026134 delete pvc hpvc
addons_test.go:606: (dbg) Run:  kubectl --context addons-026134 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-026134 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-026134 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-026134 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-026134 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-026134 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-026134 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-026134 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-026134 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-026134 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-026134 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-026134 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-026134 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-026134 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-026134 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-026134 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-026134 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:616: (dbg) Run:  kubectl --context addons-026134 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:621: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [935bb859-2a1b-4faf-84a3-6f668543eee9] Pending
helpers_test.go:344: "task-pv-pod-restore" [935bb859-2a1b-4faf-84a3-6f668543eee9] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [935bb859-2a1b-4faf-84a3-6f668543eee9] Running
addons_test.go:621: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.005952498s
addons_test.go:626: (dbg) Run:  kubectl --context addons-026134 delete pod task-pv-pod-restore
addons_test.go:630: (dbg) Run:  kubectl --context addons-026134 delete pvc hpvc-restore
addons_test.go:634: (dbg) Run:  kubectl --context addons-026134 delete volumesnapshot new-snapshot-demo
addons_test.go:638: (dbg) Run:  out/minikube-linux-amd64 -p addons-026134 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:638: (dbg) Done: out/minikube-linux-amd64 -p addons-026134 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.922408894s)
addons_test.go:642: (dbg) Run:  out/minikube-linux-amd64 -p addons-026134 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (84.95s)
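
Note: the long runs of identical helpers_test.go:394 lines above are a poll loop waiting on the PVC phase. Roughly equivalent shell, as a sketch (the test uses its own polling interval; Bound is the standard PVC phase it waits for):

	until [ "$(kubectl --context addons-026134 get pvc hpvc -o jsonpath={.status.phase} -n default)" = "Bound" ]; do
	  sleep 2
	done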

TestAddons/parallel/Headlamp (13.87s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:824: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-026134 --alsologtostderr -v=1
addons_test.go:824: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-026134 --alsologtostderr -v=1: (1.868342003s)
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7ddfbb94ff-58brw" [965d9a7a-42a5-4b6c-a11e-882160040ab0] Pending
helpers_test.go:344: "headlamp-7ddfbb94ff-58brw" [965d9a7a-42a5-4b6c-a11e-882160040ab0] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7ddfbb94ff-58brw" [965d9a7a-42a5-4b6c-a11e-882160040ab0] Running
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 12.004214369s
--- PASS: TestAddons/parallel/Headlamp (13.87s)


TestAddons/parallel/CloudSpanner (5.87s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-6548d5df46-bt5md" [cff7f4b6-a792-4a18-a2dc-9deb8fbd2c2f] Running
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.006764918s
addons_test.go:860: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-026134
--- PASS: TestAddons/parallel/CloudSpanner (5.87s)


TestAddons/parallel/LocalPath (55.71s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:873: (dbg) Run:  kubectl --context addons-026134 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:879: (dbg) Run:  kubectl --context addons-026134 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:883: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-026134 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-026134 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-026134 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-026134 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-026134 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-026134 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-026134 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [d9e89906-216d-4abd-ab1f-4d6a52088527] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [d9e89906-216d-4abd-ab1f-4d6a52088527] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [d9e89906-216d-4abd-ab1f-4d6a52088527] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.004729195s
addons_test.go:891: (dbg) Run:  kubectl --context addons-026134 get pvc test-pvc -o=json
addons_test.go:900: (dbg) Run:  out/minikube-linux-amd64 -p addons-026134 ssh "cat /opt/local-path-provisioner/pvc-e97d7705-012d-43e7-a9fc-c9831dc46ca3_default_test-pvc/file1"
addons_test.go:912: (dbg) Run:  kubectl --context addons-026134 delete pod test-local-path
addons_test.go:916: (dbg) Run:  kubectl --context addons-026134 delete pvc test-pvc
addons_test.go:920: (dbg) Run:  out/minikube-linux-amd64 -p addons-026134 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:920: (dbg) Done: out/minikube-linux-amd64 -p addons-026134 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.70477155s)
--- PASS: TestAddons/parallel/LocalPath (55.71s)
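
Note: the local-path provisioner backs each PVC with a hostPath directory under /opt/local-path-provisioner inside the node, named after the generated PVC UID. A sketch of the verification step, with the UID left as a placeholder:

	out/minikube-linux-amd64 -p addons-026134 ssh \
	  "cat /opt/local-path-provisioner/pvc-<uid>_default_test-pvc/file1"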

TestAddons/parallel/NvidiaDevicePlugin (6.65s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-lxcxb" [6ec3ae6a-e892-49c5-999d-1b3c26e81158] Running
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.009009966s
addons_test.go:955: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-026134
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.65s)


TestAddons/parallel/Yakd (6.01s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-9947fc6bf-spk4s" [00dd8664-76a4-42dc-b3bf-b08be11c14e0] Running
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.004323057s
--- PASS: TestAddons/parallel/Yakd (6.01s)


TestAddons/serial/GCPAuth/Namespaces (0.12s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:650: (dbg) Run:  kubectl --context addons-026134 create ns new-namespace
addons_test.go:664: (dbg) Run:  kubectl --context addons-026134 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)


TestAddons/StoppedEnableDisable (92.56s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-026134
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-026134: (1m32.234944221s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-026134
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-026134
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-026134
--- PASS: TestAddons/StoppedEnableDisable (92.56s)


TestCertOptions (81.94s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-900483 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-900483 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=containerd: (1m20.443997723s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-900483 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-900483 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-900483 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-900483" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-900483
--- PASS: TestCertOptions (81.94s)
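
Note: the SAN and port assertions are read straight off the apiserver certificate. A sketch of inspecting the same fields by hand; the grep pattern is the usual openssl output section header, not anything test-specific:

	out/minikube-linux-amd64 -p cert-options-900483 ssh \
	  "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
	  | grep -A1 'Subject Alternative Name'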


TestCertExpiration (266.8s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-113971 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=containerd
E0229 01:59:17.673765  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/functional-601906/client.crt: no such file or directory
E0229 01:59:28.597484  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/addons-026134/client.crt: no such file or directory
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-113971 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=containerd: (1m18.876884222s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-113971 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-113971 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=containerd: (6.889138096s)
helpers_test.go:175: Cleaning up "cert-expiration-113971" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-113971
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-113971: (1.029885255s)
--- PASS: TestCertExpiration (266.80s)
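
Note: the test issues short-lived certificates, waits out the 3m window, then restarts with a longer expiry to force regeneration. The two start invocations from this run, reduced to a sketch:

	out/minikube-linux-amd64 start -p cert-expiration-113971 --memory=2048 --cert-expiration=3m --driver=kvm2 --container-runtime=containerd
	sleep 180    # let the certificates lapse
	out/minikube-linux-amd64 start -p cert-expiration-113971 --memory=2048 --cert-expiration=8760h --driver=kvm2 --container-runtime=containerd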


TestForceSystemdFlag (80.35s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-120821 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-120821 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd: (1m19.141168154s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-120821 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-120821" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-120821
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-120821: (1.011092232s)
--- PASS: TestForceSystemdFlag (80.35s)
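
Note: with --force-systemd the assertion is that containerd is configured for the systemd cgroup driver. A sketch of the check; SystemdCgroup is the standard containerd runc option key, and grepping for it is an assumption about the generated config layout:

	out/minikube-linux-amd64 -p force-systemd-flag-120821 ssh "cat /etc/containerd/config.toml" | grep SystemdCgroup
	# expect: SystemdCgroup = true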

TestForceSystemdEnv (78.4s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-578529 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-578529 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd: (1m17.183102303s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-578529 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-578529" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-578529
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-578529: (1.009277131s)
--- PASS: TestForceSystemdEnv (78.40s)


TestKVMDriverInstallOrUpdate (4.45s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (4.45s)


TestErrorSpam/setup (43.32s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-689388 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-689388 --driver=kvm2  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-689388 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-689388 --driver=kvm2  --container-runtime=containerd: (43.31749602s)
--- PASS: TestErrorSpam/setup (43.32s)


TestErrorSpam/start (0.37s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-689388 --log_dir /tmp/nospam-689388 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-689388 --log_dir /tmp/nospam-689388 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-689388 --log_dir /tmp/nospam-689388 start --dry-run
--- PASS: TestErrorSpam/start (0.37s)


TestErrorSpam/status (0.76s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-689388 --log_dir /tmp/nospam-689388 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-689388 --log_dir /tmp/nospam-689388 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-689388 --log_dir /tmp/nospam-689388 status
--- PASS: TestErrorSpam/status (0.76s)


TestErrorSpam/pause (1.59s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-689388 --log_dir /tmp/nospam-689388 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-689388 --log_dir /tmp/nospam-689388 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-689388 --log_dir /tmp/nospam-689388 pause
--- PASS: TestErrorSpam/pause (1.59s)


TestErrorSpam/unpause (1.65s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-689388 --log_dir /tmp/nospam-689388 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-689388 --log_dir /tmp/nospam-689388 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-689388 --log_dir /tmp/nospam-689388 unpause
--- PASS: TestErrorSpam/unpause (1.65s)

TestErrorSpam/stop (2.26s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-689388 --log_dir /tmp/nospam-689388 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-689388 --log_dir /tmp/nospam-689388 stop: (2.092466768s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-689388 --log_dir /tmp/nospam-689388 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-689388 --log_dir /tmp/nospam-689388 stop
--- PASS: TestErrorSpam/stop (2.26s)

TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/18063-309085/.minikube/files/etc/test/nested/copy/316336/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (98.22s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-amd64 start -p functional-601906 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=containerd
E0229 01:19:28.597522  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/addons-026134/client.crt: no such file or directory
E0229 01:19:28.603357  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/addons-026134/client.crt: no such file or directory
E0229 01:19:28.613654  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/addons-026134/client.crt: no such file or directory
E0229 01:19:28.633918  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/addons-026134/client.crt: no such file or directory
E0229 01:19:28.674215  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/addons-026134/client.crt: no such file or directory
E0229 01:19:28.754563  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/addons-026134/client.crt: no such file or directory
E0229 01:19:28.914971  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/addons-026134/client.crt: no such file or directory
E0229 01:19:29.235731  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/addons-026134/client.crt: no such file or directory
E0229 01:19:29.876705  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/addons-026134/client.crt: no such file or directory
E0229 01:19:31.157272  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/addons-026134/client.crt: no such file or directory
E0229 01:19:33.718038  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/addons-026134/client.crt: no such file or directory
E0229 01:19:38.839232  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/addons-026134/client.crt: no such file or directory
E0229 01:19:49.079834  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/addons-026134/client.crt: no such file or directory
E0229 01:20:09.561045  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/addons-026134/client.crt: no such file or directory
functional_test.go:2230: (dbg) Done: out/minikube-linux-amd64 start -p functional-601906 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=containerd: (1m38.222047648s)
--- PASS: TestFunctional/serial/StartWithProxy (98.22s)

TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (6.2s)
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-601906 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-601906 --alsologtostderr -v=8: (6.199469984s)
functional_test.go:659: soft start took 6.200213539s for "functional-601906" cluster.
--- PASS: TestFunctional/serial/SoftStart (6.20s)

TestFunctional/serial/KubeContext (0.04s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.07s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-601906 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.32s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-601906 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-601906 cache add registry.k8s.io/pause:3.1: (1.078304665s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-601906 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-601906 cache add registry.k8s.io/pause:3.3: (1.121787123s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-601906 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-601906 cache add registry.k8s.io/pause:latest: (1.118088849s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.32s)

TestFunctional/serial/CacheCmd/cache/add_local (2.41s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-601906 /tmp/TestFunctionalserialCacheCmdcacheadd_local3408069954/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-601906 cache add minikube-local-cache-test:functional-601906
functional_test.go:1085: (dbg) Done: out/minikube-linux-amd64 -p functional-601906 cache add minikube-local-cache-test:functional-601906: (2.067357535s)
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-601906 cache delete minikube-local-cache-test:functional-601906
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-601906
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.41s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.24s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-601906 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.24s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.83s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-601906 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-601906 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-601906 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (221.57051ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-601906 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-linux-amd64 -p functional-601906 cache reload: (1.13820256s)
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-601906 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.83s)
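The four commands above are the whole point of the cache_reload test: remove the image on the node, prove `crictl inspecti` now fails, run `cache reload`, and prove the image is back. A minimal standalone sketch of that flow, assuming the `out/minikube-linux-amd64` binary and `functional-601906` profile from this run (`exitCode` is a local helper, not a harness function):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// exitCode runs a command and returns its exit code, or -1 if it never started.
func exitCode(name string, args ...string) int {
	err := exec.Command(name, args...).Run()
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		return ee.ExitCode()
	}
	if err != nil {
		return -1
	}
	return 0
}

func main() {
	bin, profile := "out/minikube-linux-amd64", "functional-601906" // from this report
	img := "registry.k8s.io/pause:latest"

	// Remove the image from the node, as the test does via crictl over ssh.
	exitCode(bin, "-p", profile, "ssh", "sudo", "crictl", "rmi", img)
	if exitCode(bin, "-p", profile, "ssh", "sudo", "crictl", "inspecti", img) == 0 {
		fmt.Println("expected inspecti to fail after rmi")
	}
	// cache reload pushes the cached image back onto the node.
	exitCode(bin, "-p", profile, "cache", "reload")
	if exitCode(bin, "-p", profile, "ssh", "sudo", "crictl", "inspecti", img) != 0 {
		fmt.Println("expected inspecti to succeed after cache reload")
	}
}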

TestFunctional/serial/CacheCmd/cache/delete (0.12s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

TestFunctional/serial/MinikubeKubectlCmd (0.12s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-601906 kubectl -- --context functional-601906 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-601906 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

TestFunctional/serial/ExtraConfig (37.63s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-601906 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0229 01:20:50.521366  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/addons-026134/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-601906 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (37.628719824s)
functional_test.go:757: restart took 37.62888428s for "functional-601906" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (37.63s)

TestFunctional/serial/ComponentHealth (0.07s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-601906 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

TestFunctional/serial/LogsCmd (1.51s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-601906 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-601906 logs: (1.513850165s)
--- PASS: TestFunctional/serial/LogsCmd (1.51s)

TestFunctional/serial/LogsFileCmd (1.46s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-601906 logs --file /tmp/TestFunctionalserialLogsFileCmd2406163160/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-601906 logs --file /tmp/TestFunctionalserialLogsFileCmd2406163160/001/logs.txt: (1.456557724s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.46s)

TestFunctional/serial/InvalidService (4.53s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-601906 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-601906
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-601906: exit status 115 (292.780505ms)

-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.38:32033 |
	|-----------|-------------|-------------|----------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-601906 delete -f testdata/invalidsvc.yaml
functional_test.go:2323: (dbg) Done: kubectl --context functional-601906 delete -f testdata/invalidsvc.yaml: (1.036644901s)
--- PASS: TestFunctional/serial/InvalidService (4.53s)
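Exit status 115 is the assertion here: `minikube service` maps SVC_UNREACHABLE onto it when the service has no running pods behind it. A hedged sketch of checking just that exit code, assuming the binary and profile names from this run and that the test's `testdata/invalidsvc.yaml` has already been applied:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// invalid-svc is the service created from the test's testdata above.
	cmd := exec.Command("out/minikube-linux-amd64", "service", "invalid-svc", "-p", "functional-601906")
	err := cmd.Run()
	var ee *exec.ExitError
	if errors.As(err, &ee) && ee.ExitCode() == 115 {
		fmt.Println("got SVC_UNREACHABLE (exit status 115), as the test expects")
		return
	}
	fmt.Println("unexpected result:", err)
}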

TestFunctional/parallel/ConfigCmd (0.45s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-601906 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-601906 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-601906 config get cpus: exit status 14 (70.873506ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-601906 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-601906 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-601906 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-601906 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-601906 config get cpus: exit status 14 (61.752741ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.45s)

TestFunctional/parallel/DashboardCmd (16.06s)
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-601906 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-601906 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 324269: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (16.06s)

TestFunctional/parallel/DryRun (0.3s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-601906 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-601906 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd: exit status 23 (143.96841ms)

-- stdout --
	* [functional-601906] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18063
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18063-309085/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18063-309085/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0229 01:21:40.173273  324284 out.go:291] Setting OutFile to fd 1 ...
	I0229 01:21:40.173528  324284 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 01:21:40.173539  324284 out.go:304] Setting ErrFile to fd 2...
	I0229 01:21:40.173544  324284 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 01:21:40.173730  324284 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18063-309085/.minikube/bin
	I0229 01:21:40.174342  324284 out.go:298] Setting JSON to false
	I0229 01:21:40.175479  324284 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":3844,"bootTime":1709165856,"procs":230,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1052-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0229 01:21:40.175547  324284 start.go:139] virtualization: kvm guest
	I0229 01:21:40.177358  324284 out.go:177] * [functional-601906] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0229 01:21:40.178880  324284 out.go:177]   - MINIKUBE_LOCATION=18063
	I0229 01:21:40.178599  324284 notify.go:220] Checking for updates...
	I0229 01:21:40.181053  324284 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0229 01:21:40.182332  324284 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18063-309085/kubeconfig
	I0229 01:21:40.183558  324284 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18063-309085/.minikube
	I0229 01:21:40.184748  324284 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0229 01:21:40.185864  324284 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0229 01:21:40.187403  324284 config.go:182] Loaded profile config "functional-601906": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0229 01:21:40.187836  324284 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0229 01:21:40.187878  324284 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 01:21:40.203491  324284 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44567
	I0229 01:21:40.203920  324284 main.go:141] libmachine: () Calling .GetVersion
	I0229 01:21:40.204533  324284 main.go:141] libmachine: Using API Version  1
	I0229 01:21:40.204548  324284 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 01:21:40.204905  324284 main.go:141] libmachine: () Calling .GetMachineName
	I0229 01:21:40.205140  324284 main.go:141] libmachine: (functional-601906) Calling .DriverName
	I0229 01:21:40.205426  324284 driver.go:392] Setting default libvirt URI to qemu:///system
	I0229 01:21:40.205708  324284 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0229 01:21:40.205747  324284 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 01:21:40.221775  324284 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36217
	I0229 01:21:40.222328  324284 main.go:141] libmachine: () Calling .GetVersion
	I0229 01:21:40.222919  324284 main.go:141] libmachine: Using API Version  1
	I0229 01:21:40.222945  324284 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 01:21:40.223314  324284 main.go:141] libmachine: () Calling .GetMachineName
	I0229 01:21:40.223518  324284 main.go:141] libmachine: (functional-601906) Calling .DriverName
	I0229 01:21:40.254803  324284 out.go:177] * Using the kvm2 driver based on existing profile
	I0229 01:21:40.255942  324284 start.go:299] selected driver: kvm2
	I0229 01:21:40.255954  324284 start.go:903] validating driver "kvm2" against &{Name:functional-601906 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-601906 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.39.38 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 01:21:40.256060  324284 start.go:914] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0229 01:21:40.258098  324284 out.go:177] 
	W0229 01:21:40.259311  324284 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0229 01:21:40.260458  324284 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-601906 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.30s)
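The failing half of the dry-run check reduces to one exit code: status 23 (RSRC_INSUFFICIENT_REQ_MEMORY) when 250MB is requested against the 1800MB usable minimum. A sketch of that assertion under the same binary/profile assumptions as above:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Same flags as the failing dry-run invocation logged above.
	cmd := exec.Command("out/minikube-linux-amd64", "start", "-p", "functional-601906",
		"--dry-run", "--memory", "250MB", "--alsologtostderr",
		"--driver=kvm2", "--container-runtime=containerd")
	out, err := cmd.CombinedOutput()
	var ee *exec.ExitError
	if errors.As(err, &ee) && ee.ExitCode() == 23 {
		fmt.Println("memory validation rejected 250MB as expected (exit status 23)")
		return
	}
	fmt.Printf("unexpected result: err=%v\n%s", err, out)
}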

TestFunctional/parallel/InternationalLanguage (0.15s)
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-601906 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-601906 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd: exit status 23 (146.033498ms)

-- stdout --
	* [functional-601906] minikube v1.32.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18063
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18063-309085/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18063-309085/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0229 01:21:40.470767  324339 out.go:291] Setting OutFile to fd 1 ...
	I0229 01:21:40.471089  324339 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 01:21:40.471100  324339 out.go:304] Setting ErrFile to fd 2...
	I0229 01:21:40.471105  324339 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 01:21:40.471382  324339 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18063-309085/.minikube/bin
	I0229 01:21:40.471891  324339 out.go:298] Setting JSON to false
	I0229 01:21:40.472889  324339 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":3845,"bootTime":1709165856,"procs":234,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1052-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0229 01:21:40.472953  324339 start.go:139] virtualization: kvm guest
	I0229 01:21:40.474641  324339 out.go:177] * [functional-601906] minikube v1.32.0 sur Ubuntu 20.04 (kvm/amd64)
	I0229 01:21:40.475762  324339 out.go:177]   - MINIKUBE_LOCATION=18063
	I0229 01:21:40.475794  324339 notify.go:220] Checking for updates...
	I0229 01:21:40.476916  324339 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0229 01:21:40.478126  324339 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18063-309085/kubeconfig
	I0229 01:21:40.479286  324339 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18063-309085/.minikube
	I0229 01:21:40.480374  324339 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0229 01:21:40.481422  324339 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0229 01:21:40.482917  324339 config.go:182] Loaded profile config "functional-601906": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0229 01:21:40.483334  324339 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0229 01:21:40.483407  324339 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 01:21:40.502399  324339 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37579
	I0229 01:21:40.502799  324339 main.go:141] libmachine: () Calling .GetVersion
	I0229 01:21:40.503343  324339 main.go:141] libmachine: Using API Version  1
	I0229 01:21:40.503364  324339 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 01:21:40.503712  324339 main.go:141] libmachine: () Calling .GetMachineName
	I0229 01:21:40.503919  324339 main.go:141] libmachine: (functional-601906) Calling .DriverName
	I0229 01:21:40.504183  324339 driver.go:392] Setting default libvirt URI to qemu:///system
	I0229 01:21:40.504646  324339 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0229 01:21:40.504697  324339 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 01:21:40.519444  324339 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44107
	I0229 01:21:40.519870  324339 main.go:141] libmachine: () Calling .GetVersion
	I0229 01:21:40.520350  324339 main.go:141] libmachine: Using API Version  1
	I0229 01:21:40.520365  324339 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 01:21:40.520688  324339 main.go:141] libmachine: () Calling .GetMachineName
	I0229 01:21:40.520901  324339 main.go:141] libmachine: (functional-601906) Calling .DriverName
	I0229 01:21:40.552178  324339 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0229 01:21:40.553477  324339 start.go:299] selected driver: kvm2
	I0229 01:21:40.553489  324339 start.go:903] validating driver "kvm2" against &{Name:functional-601906 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-601906 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.39.38 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 01:21:40.553584  324339 start.go:914] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0229 01:21:40.555564  324339 out.go:177] 
	W0229 01:21:40.556722  324339 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0229 01:21:40.558098  324339 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.15s)

TestFunctional/parallel/StatusCmd (0.86s)
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-601906 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-601906 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-601906 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.86s)

TestFunctional/parallel/ServiceCmdConnect (21.48s)
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1625: (dbg) Run:  kubectl --context functional-601906 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-601906 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-55497b8b78-wj6lj" [cbbb1150-6fc2-4835-ba3a-c30ed4f14eef] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-55497b8b78-wj6lj" [cbbb1150-6fc2-4835-ba3a-c30ed4f14eef] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 21.007802925s
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 -p functional-601906 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.168.39.38:31392
functional_test.go:1671: http://192.168.39.38:31392: success! body:

Hostname: hello-node-connect-55497b8b78-wj6lj

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.38:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.39.38:31392
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (21.48s)

TestFunctional/parallel/AddonsCmd (0.18s)
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-linux-amd64 -p functional-601906 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-linux-amd64 -p functional-601906 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.18s)

TestFunctional/parallel/PersistentVolumeClaim (29.52s)
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [d4614e08-d2c4-41c5-a49f-f07f87d509e2] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.006247381s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-601906 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-601906 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-601906 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-601906 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-601906 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [90cf67cc-9eb7-49f4-8f2c-6a04a5e56821] Pending
helpers_test.go:344: "sp-pod" [90cf67cc-9eb7-49f4-8f2c-6a04a5e56821] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
2024/02/29 01:21:53 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
helpers_test.go:344: "sp-pod" [90cf67cc-9eb7-49f4-8f2c-6a04a5e56821] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 13.005094142s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-601906 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-601906 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-601906 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [bfac1d5a-e922-4bc0-9d1d-29473afaf119] Pending
helpers_test.go:344: "sp-pod" [bfac1d5a-e922-4bc0-9d1d-29473afaf119] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [bfac1d5a-e922-4bc0-9d1d-29473afaf119] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.011605395s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-601906 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (29.52s)

TestFunctional/parallel/SSHCmd (0.38s)
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-linux-amd64 -p functional-601906 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-linux-amd64 -p functional-601906 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.38s)

TestFunctional/parallel/CpCmd (1.43s)
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-601906 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-601906 ssh -n functional-601906 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-601906 cp functional-601906:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd4053854383/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-601906 ssh -n functional-601906 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-601906 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-601906 ssh -n functional-601906 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.43s)

TestFunctional/parallel/MySQL (27.17s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-601906 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-859648c796-bdkww" [d8227ed6-e3aa-4a65-b5ae-b4186e147ab0] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-859648c796-bdkww" [d8227ed6-e3aa-4a65-b5ae-b4186e147ab0] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 20.004110864s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-601906 exec mysql-859648c796-bdkww -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-601906 exec mysql-859648c796-bdkww -- mysql -ppassword -e "show databases;": exit status 1 (174.893766ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-601906 exec mysql-859648c796-bdkww -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-601906 exec mysql-859648c796-bdkww -- mysql -ppassword -e "show databases;": exit status 1 (184.420209ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-601906 exec mysql-859648c796-bdkww -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-601906 exec mysql-859648c796-bdkww -- mysql -ppassword -e "show databases;": exit status 1 (201.879594ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-601906 exec mysql-859648c796-bdkww -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (27.17s)
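The three failures above are expected churn while mysqld initializes ("Access denied" and socket errors come and go during startup); the test simply retries until `show databases;` succeeds. A sketch of the same retry loop; the context and pod name are the ones observed in this run and would normally be resolved from the mysql deployment instead:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Pod name mysql-859648c796-bdkww is specific to this run.
	args := []string{"--context", "functional-601906", "exec", "mysql-859648c796-bdkww",
		"--", "mysql", "-ppassword", "-e", "show databases;"}
	for attempt := 1; attempt <= 10; attempt++ {
		out, err := exec.Command("kubectl", args...).CombinedOutput()
		if err == nil {
			fmt.Printf("mysql ready after %d attempt(s):\n%s", attempt, out)
			return
		}
		time.Sleep(3 * time.Second) // mysqld may still be initializing
	}
	fmt.Println("mysql never became ready")
}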

TestFunctional/parallel/FileSync (0.24s)
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/316336/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-601906 ssh "sudo cat /etc/test/nested/copy/316336/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.24s)

TestFunctional/parallel/CertSync (1.47s)
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/316336.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-601906 ssh "sudo cat /etc/ssl/certs/316336.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/316336.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-601906 ssh "sudo cat /usr/share/ca-certificates/316336.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-601906 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3163362.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-601906 ssh "sudo cat /etc/ssl/certs/3163362.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/3163362.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-601906 ssh "sudo cat /usr/share/ca-certificates/3163362.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-601906 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.47s)
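The .0 filenames checked above (51391683.0, 3ec20f2e.0) are OpenSSL subject-hash names: the trust store links each certificate under <subject_hash>.0 so OpenSSL can look it up by hash. A sketch of deriving that name by shelling out to openssl (illustrative; assumes openssl is on PATH):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// hashedName returns the trust-store filename OpenSSL would use for certPath.
	func hashedName(certPath string) (string, error) {
		out, err := exec.Command("openssl", "x509", "-noout", "-subject_hash", "-in", certPath).Output()
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(out)) + ".0", nil
	}

	func main() {
		name, err := hashedName("/etc/ssl/certs/316336.pem")
		if err != nil {
			fmt.Println("openssl failed:", err)
			return
		}
		fmt.Println(name) // 51391683.0 for the cert used in this run
	}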

TestFunctional/parallel/NodeLabels (0.06s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-601906 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.47s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-601906 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-601906 ssh "sudo systemctl is-active docker": exit status 1 (236.56363ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-601906 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-601906 ssh "sudo systemctl is-active crio": exit status 1 (229.146478ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.47s)
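The non-zero exits above are the expected result: systemctl is-active prints the unit state and exits non-zero for anything other than "active" (status 3 inside the VM, surfaced here as "ssh: Process exited with status 3"). On a containerd cluster, docker and crio reporting "inactive" is precisely the pass condition. A sketch of reading the state instead of treating the exit code as a failure (illustrative):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// unitState returns what `systemctl is-active` printed, ignoring the
	// non-zero exit that accompanies any state other than "active".
	func unitState(unit string) string {
		out, _ := exec.Command("systemctl", "is-active", unit).Output()
		return strings.TrimSpace(string(out))
	}

	func main() {
		for _, unit := range []string{"docker", "crio"} {
			fmt.Printf("%s: %s\n", unit, unitState(unit))
		}
	}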

TestFunctional/parallel/License (0.64s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.64s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.11s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-601906 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.11s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.1s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-601906 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.10s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.11s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-601906 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.11s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-601906 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-601906 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.4
registry.k8s.io/kube-proxy:v1.28.4
registry.k8s.io/kube-controller-manager:v1.28.4
registry.k8s.io/kube-apiserver:v1.28.4
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-601906
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-601906
docker.io/kindest/kindnetd:v20230809-80a64d96
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-601906 image ls --format short --alsologtostderr:
I0229 01:21:42.537008  324564 out.go:291] Setting OutFile to fd 1 ...
I0229 01:21:42.537277  324564 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0229 01:21:42.537286  324564 out.go:304] Setting ErrFile to fd 2...
I0229 01:21:42.537290  324564 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0229 01:21:42.537441  324564 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18063-309085/.minikube/bin
I0229 01:21:42.537964  324564 config.go:182] Loaded profile config "functional-601906": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0229 01:21:42.538065  324564 config.go:182] Loaded profile config "functional-601906": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0229 01:21:42.538475  324564 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0229 01:21:42.538522  324564 main.go:141] libmachine: Launching plugin server for driver kvm2
I0229 01:21:42.553539  324564 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42993
I0229 01:21:42.554056  324564 main.go:141] libmachine: () Calling .GetVersion
I0229 01:21:42.554666  324564 main.go:141] libmachine: Using API Version  1
I0229 01:21:42.554695  324564 main.go:141] libmachine: () Calling .SetConfigRaw
I0229 01:21:42.555016  324564 main.go:141] libmachine: () Calling .GetMachineName
I0229 01:21:42.555210  324564 main.go:141] libmachine: (functional-601906) Calling .GetState
I0229 01:21:42.556849  324564 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0229 01:21:42.556893  324564 main.go:141] libmachine: Launching plugin server for driver kvm2
I0229 01:21:42.572556  324564 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41691
I0229 01:21:42.573023  324564 main.go:141] libmachine: () Calling .GetVersion
I0229 01:21:42.573458  324564 main.go:141] libmachine: Using API Version  1
I0229 01:21:42.573482  324564 main.go:141] libmachine: () Calling .SetConfigRaw
I0229 01:21:42.573787  324564 main.go:141] libmachine: () Calling .GetMachineName
I0229 01:21:42.574019  324564 main.go:141] libmachine: (functional-601906) Calling .DriverName
I0229 01:21:42.574242  324564 ssh_runner.go:195] Run: systemctl --version
I0229 01:21:42.574271  324564 main.go:141] libmachine: (functional-601906) Calling .GetSSHHostname
I0229 01:21:42.576961  324564 main.go:141] libmachine: (functional-601906) DBG | domain functional-601906 has defined MAC address 52:54:00:7b:19:07 in network mk-functional-601906
I0229 01:21:42.577342  324564 main.go:141] libmachine: (functional-601906) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:19:07", ip: ""} in network mk-functional-601906: {Iface:virbr1 ExpiryTime:2024-02-29 02:18:51 +0000 UTC Type:0 Mac:52:54:00:7b:19:07 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:functional-601906 Clientid:01:52:54:00:7b:19:07}
I0229 01:21:42.577372  324564 main.go:141] libmachine: (functional-601906) DBG | domain functional-601906 has defined IP address 192.168.39.38 and MAC address 52:54:00:7b:19:07 in network mk-functional-601906
I0229 01:21:42.577517  324564 main.go:141] libmachine: (functional-601906) Calling .GetSSHPort
I0229 01:21:42.577693  324564 main.go:141] libmachine: (functional-601906) Calling .GetSSHKeyPath
I0229 01:21:42.577857  324564 main.go:141] libmachine: (functional-601906) Calling .GetSSHUsername
I0229 01:21:42.578027  324564 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-309085/.minikube/machines/functional-601906/id_rsa Username:docker}
I0229 01:21:42.653290  324564 ssh_runner.go:195] Run: sudo crictl images --output json
I0229 01:21:42.694029  324564 main.go:141] libmachine: Making call to close driver server
I0229 01:21:42.694046  324564 main.go:141] libmachine: (functional-601906) Calling .Close
I0229 01:21:42.694327  324564 main.go:141] libmachine: (functional-601906) DBG | Closing plugin on server side
I0229 01:21:42.694361  324564 main.go:141] libmachine: Successfully made call to close driver server
I0229 01:21:42.694381  324564 main.go:141] libmachine: Making call to close connection to plugin binary
I0229 01:21:42.694390  324564 main.go:141] libmachine: Making call to close driver server
I0229 01:21:42.694400  324564 main.go:141] libmachine: (functional-601906) Calling .Close
I0229 01:21:42.694652  324564 main.go:141] libmachine: (functional-601906) DBG | Closing plugin on server side
I0229 01:21:42.694683  324564 main.go:141] libmachine: Successfully made call to close driver server
I0229 01:21:42.694694  324564 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.25s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-601906 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-601906 image ls --format table --alsologtostderr:
|---------------------------------------------|--------------------|---------------|--------|
|                    Image                    |        Tag         |   Image ID    |  Size  |
|---------------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/coredns/coredns             | v1.10.1            | sha256:ead0a4 | 16.2MB |
| registry.k8s.io/kube-proxy                  | v1.28.4            | sha256:83f6cc | 24.6MB |
| registry.k8s.io/pause                       | latest             | sha256:350b16 | 72.3kB |
| docker.io/kindest/kindnetd                  | v20230809-80a64d96 | sha256:c7d129 | 27.7MB |
| registry.k8s.io/etcd                        | 3.5.9-0            | sha256:73deb9 | 103MB  |
| registry.k8s.io/kube-apiserver              | v1.28.4            | sha256:7fe0e6 | 34.7MB |
| registry.k8s.io/kube-controller-manager     | v1.28.4            | sha256:d058aa | 33.4MB |
| registry.k8s.io/pause                       | 3.1                | sha256:da86e6 | 315kB  |
| registry.k8s.io/echoserver                  | 1.8                | sha256:82e4c8 | 46.2MB |
| docker.io/library/mysql                     | 5.7                | sha256:510733 | 138MB  |
| localhost/my-image                          | functional-601906  | sha256:df3292 | 775kB  |
| registry.k8s.io/kube-scheduler              | v1.28.4            | sha256:e3db31 | 18.8MB |
| registry.k8s.io/pause                       | 3.9                | sha256:e6f181 | 322kB  |
| docker.io/library/minikube-local-cache-test | functional-601906  | sha256:ac1537 | 1.01kB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc       | sha256:56cc51 | 2.4MB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                 | sha256:6e38f4 | 9.06MB |
| registry.k8s.io/pause                       | 3.3                | sha256:0184c1 | 298kB  |
| gcr.io/google-containers/addon-resizer      | functional-601906  | sha256:ffd4cf | 10.8MB |
|---------------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-601906 image ls --format table --alsologtostderr:
I0229 01:21:47.825058  325190 out.go:291] Setting OutFile to fd 1 ...
I0229 01:21:47.825197  325190 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0229 01:21:47.825208  325190 out.go:304] Setting ErrFile to fd 2...
I0229 01:21:47.825212  325190 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0229 01:21:47.825399  325190 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18063-309085/.minikube/bin
I0229 01:21:47.826007  325190 config.go:182] Loaded profile config "functional-601906": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0229 01:21:47.826165  325190 config.go:182] Loaded profile config "functional-601906": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0229 01:21:47.826570  325190 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0229 01:21:47.826612  325190 main.go:141] libmachine: Launching plugin server for driver kvm2
I0229 01:21:47.842614  325190 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38067
I0229 01:21:47.843104  325190 main.go:141] libmachine: () Calling .GetVersion
I0229 01:21:47.843830  325190 main.go:141] libmachine: Using API Version  1
I0229 01:21:47.843862  325190 main.go:141] libmachine: () Calling .SetConfigRaw
I0229 01:21:47.844316  325190 main.go:141] libmachine: () Calling .GetMachineName
I0229 01:21:47.844523  325190 main.go:141] libmachine: (functional-601906) Calling .GetState
I0229 01:21:47.846791  325190 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0229 01:21:47.846858  325190 main.go:141] libmachine: Launching plugin server for driver kvm2
I0229 01:21:47.861828  325190 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36029
I0229 01:21:47.862323  325190 main.go:141] libmachine: () Calling .GetVersion
I0229 01:21:47.862927  325190 main.go:141] libmachine: Using API Version  1
I0229 01:21:47.862960  325190 main.go:141] libmachine: () Calling .SetConfigRaw
I0229 01:21:47.863365  325190 main.go:141] libmachine: () Calling .GetMachineName
I0229 01:21:47.863582  325190 main.go:141] libmachine: (functional-601906) Calling .DriverName
I0229 01:21:47.863828  325190 ssh_runner.go:195] Run: systemctl --version
I0229 01:21:47.863897  325190 main.go:141] libmachine: (functional-601906) Calling .GetSSHHostname
I0229 01:21:47.866994  325190 main.go:141] libmachine: (functional-601906) DBG | domain functional-601906 has defined MAC address 52:54:00:7b:19:07 in network mk-functional-601906
I0229 01:21:47.867357  325190 main.go:141] libmachine: (functional-601906) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:19:07", ip: ""} in network mk-functional-601906: {Iface:virbr1 ExpiryTime:2024-02-29 02:18:51 +0000 UTC Type:0 Mac:52:54:00:7b:19:07 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:functional-601906 Clientid:01:52:54:00:7b:19:07}
I0229 01:21:47.867387  325190 main.go:141] libmachine: (functional-601906) DBG | domain functional-601906 has defined IP address 192.168.39.38 and MAC address 52:54:00:7b:19:07 in network mk-functional-601906
I0229 01:21:47.867488  325190 main.go:141] libmachine: (functional-601906) Calling .GetSSHPort
I0229 01:21:47.867660  325190 main.go:141] libmachine: (functional-601906) Calling .GetSSHKeyPath
I0229 01:21:47.867840  325190 main.go:141] libmachine: (functional-601906) Calling .GetSSHUsername
I0229 01:21:47.867985  325190 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-309085/.minikube/machines/functional-601906/id_rsa Username:docker}
I0229 01:21:47.957209  325190 ssh_runner.go:195] Run: sudo crictl images --output json
I0229 01:21:48.008971  325190 main.go:141] libmachine: Making call to close driver server
I0229 01:21:48.008991  325190 main.go:141] libmachine: (functional-601906) Calling .Close
I0229 01:21:48.009276  325190 main.go:141] libmachine: Successfully made call to close driver server
I0229 01:21:48.009320  325190 main.go:141] libmachine: Making call to close connection to plugin binary
I0229 01:21:48.009340  325190 main.go:141] libmachine: Making call to close driver server
I0229 01:21:48.009352  325190 main.go:141] libmachine: (functional-601906) Calling .Close
I0229 01:21:48.009564  325190 main.go:141] libmachine: Successfully made call to close driver server
I0229 01:21:48.009576  325190 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.25s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-601906 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-601906 image ls --format json --alsologtostderr:
[{"id":"sha256:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257","repoDigests":["registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb"],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.4"],"size":"34683820"},{"id":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"9058936"},{"id":"sha256:5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb"],"repoTags":["docker.io/library/mysql:5.7"],"size":"137909886"},{"id":"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"2395207"},{"id":"sha256:df32920545ac1ac3fba63600a73179cfcdc1e19fd9595905584290cd827adfbc","repoDigests":[],"repoTags":["localhost/my-image:functional-601906"],"size":"774901"},{"id":"sha256:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.4"],"size":"33420443"},{"id":"sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"315399"},{"id":"sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"72306"},{"id":"sha256:ac15372550870708ebb749239500476b00ed49e33cba722a9746508020a5b5f8","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-601906"],"size":"1006"},{"id":"sha256:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","repoDigests":["registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"102894559"},{"id":"sha256:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e","repoDigests":["registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"],"repoTags":["registry.k8s.io/kube-proxy:v1.28.4"],"size":"24581402"},{"id":"sha256:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1","repoDigests":["registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba"],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.4"],"size":"18834488"},{"id":"sha256:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc","repoDigests":["docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052"],"repoTags":["docker.io/kindest/kindnetd:v20230809-80a64d96"],"size":"27737299"},{"id":"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e"],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"16190758"},{"id":"sha256:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"46237695"},{"id":"sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"297686"},{"id":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"321520"},{"id":"sha256:ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-601906"],"size":"10823156"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-601906 image ls --format json --alsologtostderr:
I0229 01:21:47.547989  325142 out.go:291] Setting OutFile to fd 1 ...
I0229 01:21:47.548102  325142 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0229 01:21:47.548107  325142 out.go:304] Setting ErrFile to fd 2...
I0229 01:21:47.548111  325142 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0229 01:21:47.548306  325142 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18063-309085/.minikube/bin
I0229 01:21:47.548923  325142 config.go:182] Loaded profile config "functional-601906": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0229 01:21:47.549023  325142 config.go:182] Loaded profile config "functional-601906": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0229 01:21:47.549360  325142 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0229 01:21:47.549416  325142 main.go:141] libmachine: Launching plugin server for driver kvm2
I0229 01:21:47.566177  325142 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38523
I0229 01:21:47.566701  325142 main.go:141] libmachine: () Calling .GetVersion
I0229 01:21:47.567457  325142 main.go:141] libmachine: Using API Version  1
I0229 01:21:47.567481  325142 main.go:141] libmachine: () Calling .SetConfigRaw
I0229 01:21:47.567897  325142 main.go:141] libmachine: () Calling .GetMachineName
I0229 01:21:47.568136  325142 main.go:141] libmachine: (functional-601906) Calling .GetState
I0229 01:21:47.570392  325142 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0229 01:21:47.570440  325142 main.go:141] libmachine: Launching plugin server for driver kvm2
I0229 01:21:47.585797  325142 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42565
I0229 01:21:47.586241  325142 main.go:141] libmachine: () Calling .GetVersion
I0229 01:21:47.586809  325142 main.go:141] libmachine: Using API Version  1
I0229 01:21:47.586835  325142 main.go:141] libmachine: () Calling .SetConfigRaw
I0229 01:21:47.587223  325142 main.go:141] libmachine: () Calling .GetMachineName
I0229 01:21:47.587463  325142 main.go:141] libmachine: (functional-601906) Calling .DriverName
I0229 01:21:47.587688  325142 ssh_runner.go:195] Run: systemctl --version
I0229 01:21:47.587726  325142 main.go:141] libmachine: (functional-601906) Calling .GetSSHHostname
I0229 01:21:47.590841  325142 main.go:141] libmachine: (functional-601906) DBG | domain functional-601906 has defined MAC address 52:54:00:7b:19:07 in network mk-functional-601906
I0229 01:21:47.591290  325142 main.go:141] libmachine: (functional-601906) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:19:07", ip: ""} in network mk-functional-601906: {Iface:virbr1 ExpiryTime:2024-02-29 02:18:51 +0000 UTC Type:0 Mac:52:54:00:7b:19:07 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:functional-601906 Clientid:01:52:54:00:7b:19:07}
I0229 01:21:47.591343  325142 main.go:141] libmachine: (functional-601906) DBG | domain functional-601906 has defined IP address 192.168.39.38 and MAC address 52:54:00:7b:19:07 in network mk-functional-601906
I0229 01:21:47.591439  325142 main.go:141] libmachine: (functional-601906) Calling .GetSSHPort
I0229 01:21:47.591603  325142 main.go:141] libmachine: (functional-601906) Calling .GetSSHKeyPath
I0229 01:21:47.591743  325142 main.go:141] libmachine: (functional-601906) Calling .GetSSHUsername
I0229 01:21:47.591865  325142 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-309085/.minikube/machines/functional-601906/id_rsa Username:docker}
I0229 01:21:47.680240  325142 ssh_runner.go:195] Run: sudo crictl images --output json
I0229 01:21:47.760139  325142 main.go:141] libmachine: Making call to close driver server
I0229 01:21:47.760156  325142 main.go:141] libmachine: (functional-601906) Calling .Close
I0229 01:21:47.760414  325142 main.go:141] libmachine: Successfully made call to close driver server
I0229 01:21:47.760430  325142 main.go:141] libmachine: (functional-601906) DBG | Closing plugin on server side
I0229 01:21:47.760441  325142 main.go:141] libmachine: Making call to close connection to plugin binary
I0229 01:21:47.760451  325142 main.go:141] libmachine: Making call to close driver server
I0229 01:21:47.760458  325142 main.go:141] libmachine: (functional-601906) Calling .Close
I0229 01:21:47.760704  325142 main.go:141] libmachine: Successfully made call to close driver server
I0229 01:21:47.760722  325142 main.go:141] libmachine: Making call to close connection to plugin binary
I0229 01:21:47.760723  325142 main.go:141] libmachine: (functional-601906) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.28s)
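The JSON printed by `image ls --format json` above is an array of objects with id, repoDigests, repoTags, and size fields. A minimal sketch of consuming it (struct modeled on the output in this log, not on minikube's internal types):

	package main

	import (
		"encoding/json"
		"fmt"
		"os"
	)

	type image struct {
		ID          string   `json:"id"`
		RepoDigests []string `json:"repoDigests"`
		RepoTags    []string `json:"repoTags"`
		Size        string   `json:"size"` // bytes, encoded as a string
	}

	func main() {
		// e.g. out/minikube-linux-amd64 -p functional-601906 image ls --format json | go run main.go
		var images []image
		if err := json.NewDecoder(os.Stdin).Decode(&images); err != nil {
			fmt.Fprintln(os.Stderr, "decode:", err)
			os.Exit(1)
		}
		for _, img := range images {
			fmt.Printf("%v  %s bytes\n", img.RepoTags, img.Size)
		}
	}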

TestFunctional/parallel/ImageCommands/ImageListYaml (0.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-601906 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-601906 image ls --format yaml --alsologtostderr:
- id: sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "315399"
- id: sha256:ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-601906
size: "10823156"
- id: sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "16190758"
- id: sha256:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "46237695"
- id: sha256:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.4
size: "34683820"
- id: sha256:ac15372550870708ebb749239500476b00ed49e33cba722a9746508020a5b5f8
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-601906
size: "1006"
- id: sha256:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9
repoDigests:
- registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "102894559"
- id: sha256:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.4
size: "33420443"
- id: sha256:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.4
size: "18834488"
- id: sha256:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e
repoDigests:
- registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532
repoTags:
- registry.k8s.io/kube-proxy:v1.28.4
size: "24581402"
- id: sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "297686"
- id: sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
repoTags:
- registry.k8s.io/pause:3.9
size: "321520"
- id: sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "72306"
- id: sha256:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc
repoDigests:
- docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052
repoTags:
- docker.io/kindest/kindnetd:v20230809-80a64d96
size: "27737299"
- id: sha256:5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
repoTags:
- docker.io/library/mysql:5.7
size: "137909886"
- id: sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "2395207"
- id: sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "9058936"
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-601906 image ls --format yaml --alsologtostderr:
I0229 01:21:42.755757  324588 out.go:291] Setting OutFile to fd 1 ...
I0229 01:21:42.755892  324588 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0229 01:21:42.755904  324588 out.go:304] Setting ErrFile to fd 2...
I0229 01:21:42.755910  324588 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0229 01:21:42.756111  324588 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18063-309085/.minikube/bin
I0229 01:21:42.756676  324588 config.go:182] Loaded profile config "functional-601906": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0229 01:21:42.756817  324588 config.go:182] Loaded profile config "functional-601906": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0229 01:21:42.757192  324588 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0229 01:21:42.757244  324588 main.go:141] libmachine: Launching plugin server for driver kvm2
I0229 01:21:42.771891  324588 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44817
I0229 01:21:42.772363  324588 main.go:141] libmachine: () Calling .GetVersion
I0229 01:21:42.772933  324588 main.go:141] libmachine: Using API Version  1
I0229 01:21:42.772958  324588 main.go:141] libmachine: () Calling .SetConfigRaw
I0229 01:21:42.773298  324588 main.go:141] libmachine: () Calling .GetMachineName
I0229 01:21:42.773492  324588 main.go:141] libmachine: (functional-601906) Calling .GetState
I0229 01:21:42.775221  324588 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0229 01:21:42.775267  324588 main.go:141] libmachine: Launching plugin server for driver kvm2
I0229 01:21:42.789341  324588 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35439
I0229 01:21:42.789708  324588 main.go:141] libmachine: () Calling .GetVersion
I0229 01:21:42.790194  324588 main.go:141] libmachine: Using API Version  1
I0229 01:21:42.790221  324588 main.go:141] libmachine: () Calling .SetConfigRaw
I0229 01:21:42.790550  324588 main.go:141] libmachine: () Calling .GetMachineName
I0229 01:21:42.790747  324588 main.go:141] libmachine: (functional-601906) Calling .DriverName
I0229 01:21:42.790955  324588 ssh_runner.go:195] Run: systemctl --version
I0229 01:21:42.790978  324588 main.go:141] libmachine: (functional-601906) Calling .GetSSHHostname
I0229 01:21:42.793543  324588 main.go:141] libmachine: (functional-601906) DBG | domain functional-601906 has defined MAC address 52:54:00:7b:19:07 in network mk-functional-601906
I0229 01:21:42.793931  324588 main.go:141] libmachine: (functional-601906) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:19:07", ip: ""} in network mk-functional-601906: {Iface:virbr1 ExpiryTime:2024-02-29 02:18:51 +0000 UTC Type:0 Mac:52:54:00:7b:19:07 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:functional-601906 Clientid:01:52:54:00:7b:19:07}
I0229 01:21:42.793957  324588 main.go:141] libmachine: (functional-601906) DBG | domain functional-601906 has defined IP address 192.168.39.38 and MAC address 52:54:00:7b:19:07 in network mk-functional-601906
I0229 01:21:42.794117  324588 main.go:141] libmachine: (functional-601906) Calling .GetSSHPort
I0229 01:21:42.794291  324588 main.go:141] libmachine: (functional-601906) Calling .GetSSHKeyPath
I0229 01:21:42.794439  324588 main.go:141] libmachine: (functional-601906) Calling .GetSSHUsername
I0229 01:21:42.794560  324588 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-309085/.minikube/machines/functional-601906/id_rsa Username:docker}
I0229 01:21:42.872857  324588 ssh_runner.go:195] Run: sudo crictl images --output json
I0229 01:21:42.912301  324588 main.go:141] libmachine: Making call to close driver server
I0229 01:21:42.912334  324588 main.go:141] libmachine: (functional-601906) Calling .Close
I0229 01:21:42.912603  324588 main.go:141] libmachine: Successfully made call to close driver server
I0229 01:21:42.912631  324588 main.go:141] libmachine: Making call to close connection to plugin binary
I0229 01:21:42.912641  324588 main.go:141] libmachine: Making call to close driver server
I0229 01:21:42.912649  324588 main.go:141] libmachine: (functional-601906) DBG | Closing plugin on server side
I0229 01:21:42.912661  324588 main.go:141] libmachine: (functional-601906) Calling .Close
I0229 01:21:42.912882  324588 main.go:141] libmachine: Successfully made call to close driver server
I0229 01:21:42.912895  324588 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.22s)

TestFunctional/parallel/ImageCommands/ImageBuild (5s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-601906 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-601906 ssh pgrep buildkitd: exit status 1 (209.533085ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-601906 image build -t localhost/my-image:functional-601906 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-601906 image build -t localhost/my-image:functional-601906 testdata/build --alsologtostderr: (4.482540423s)
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-601906 image build -t localhost/my-image:functional-601906 testdata/build --alsologtostderr:
I0229 01:21:43.184869  324642 out.go:291] Setting OutFile to fd 1 ...
I0229 01:21:43.184977  324642 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0229 01:21:43.184982  324642 out.go:304] Setting ErrFile to fd 2...
I0229 01:21:43.184987  324642 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0229 01:21:43.185187  324642 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18063-309085/.minikube/bin
I0229 01:21:43.185780  324642 config.go:182] Loaded profile config "functional-601906": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0229 01:21:43.186333  324642 config.go:182] Loaded profile config "functional-601906": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0229 01:21:43.186803  324642 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0229 01:21:43.186853  324642 main.go:141] libmachine: Launching plugin server for driver kvm2
I0229 01:21:43.202815  324642 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35459
I0229 01:21:43.203310  324642 main.go:141] libmachine: () Calling .GetVersion
I0229 01:21:43.203912  324642 main.go:141] libmachine: Using API Version  1
I0229 01:21:43.203943  324642 main.go:141] libmachine: () Calling .SetConfigRaw
I0229 01:21:43.204306  324642 main.go:141] libmachine: () Calling .GetMachineName
I0229 01:21:43.204494  324642 main.go:141] libmachine: (functional-601906) Calling .GetState
I0229 01:21:43.206427  324642 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0229 01:21:43.206470  324642 main.go:141] libmachine: Launching plugin server for driver kvm2
I0229 01:21:43.221395  324642 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36663
I0229 01:21:43.221797  324642 main.go:141] libmachine: () Calling .GetVersion
I0229 01:21:43.222312  324642 main.go:141] libmachine: Using API Version  1
I0229 01:21:43.222340  324642 main.go:141] libmachine: () Calling .SetConfigRaw
I0229 01:21:43.222670  324642 main.go:141] libmachine: () Calling .GetMachineName
I0229 01:21:43.222873  324642 main.go:141] libmachine: (functional-601906) Calling .DriverName
I0229 01:21:43.223072  324642 ssh_runner.go:195] Run: systemctl --version
I0229 01:21:43.223100  324642 main.go:141] libmachine: (functional-601906) Calling .GetSSHHostname
I0229 01:21:43.225797  324642 main.go:141] libmachine: (functional-601906) DBG | domain functional-601906 has defined MAC address 52:54:00:7b:19:07 in network mk-functional-601906
I0229 01:21:43.226257  324642 main.go:141] libmachine: (functional-601906) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:19:07", ip: ""} in network mk-functional-601906: {Iface:virbr1 ExpiryTime:2024-02-29 02:18:51 +0000 UTC Type:0 Mac:52:54:00:7b:19:07 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:functional-601906 Clientid:01:52:54:00:7b:19:07}
I0229 01:21:43.226288  324642 main.go:141] libmachine: (functional-601906) DBG | domain functional-601906 has defined IP address 192.168.39.38 and MAC address 52:54:00:7b:19:07 in network mk-functional-601906
I0229 01:21:43.226492  324642 main.go:141] libmachine: (functional-601906) Calling .GetSSHPort
I0229 01:21:43.226670  324642 main.go:141] libmachine: (functional-601906) Calling .GetSSHKeyPath
I0229 01:21:43.226848  324642 main.go:141] libmachine: (functional-601906) Calling .GetSSHUsername
I0229 01:21:43.227001  324642 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-309085/.minikube/machines/functional-601906/id_rsa Username:docker}
I0229 01:21:43.331964  324642 build_images.go:151] Building image from path: /tmp/build.1968876232.tar
I0229 01:21:43.332034  324642 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0229 01:21:43.369252  324642 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1968876232.tar
I0229 01:21:43.382772  324642 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1968876232.tar: stat -c "%s %y" /var/lib/minikube/build/build.1968876232.tar: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/build/build.1968876232.tar': No such file or directory
I0229 01:21:43.382827  324642 ssh_runner.go:362] scp /tmp/build.1968876232.tar --> /var/lib/minikube/build/build.1968876232.tar (3072 bytes)
I0229 01:21:43.428351  324642 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1968876232
I0229 01:21:43.442037  324642 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1968876232 -xf /var/lib/minikube/build/build.1968876232.tar
I0229 01:21:43.455649  324642 containerd.go:379] Building image: /var/lib/minikube/build/build.1968876232
I0229 01:21:43.455747  324642 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.1968876232 --local dockerfile=/var/lib/minikube/build/build.1968876232 --output type=image,name=localhost/my-image:functional-601906
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.1s
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.6s
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.1s
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.1s done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.2s
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.1s done
#5 DONE 1.0s
#6 [2/3] RUN true
#6 DONE 0.6s
#7 [3/3] ADD content.txt /
#7 DONE 0.1s
#8 exporting to image
#8 exporting layers
#8 exporting layers 0.3s done
#8 exporting manifest sha256:80c531b2032019457afae4113cf780b62f4a8fb59d8158c6a23772341857ecc3 0.0s done
#8 exporting config sha256:df32920545ac1ac3fba63600a73179cfcdc1e19fd9595905584290cd827adfbc 0.0s done
#8 naming to localhost/my-image:functional-601906 done
#8 DONE 0.3s
I0229 01:21:47.570880  324642 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.1968876232 --local dockerfile=/var/lib/minikube/build/build.1968876232 --output type=image,name=localhost/my-image:functional-601906: (4.115099215s)
I0229 01:21:47.570947  324642 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1968876232
I0229 01:21:47.588735  324642 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1968876232.tar
I0229 01:21:47.603993  324642 build_images.go:207] Built localhost/my-image:functional-601906 from /tmp/build.1968876232.tar
I0229 01:21:47.604023  324642 build_images.go:123] succeeded building to: functional-601906
I0229 01:21:47.604029  324642 build_images.go:124] failed building to: 
I0229 01:21:47.604053  324642 main.go:141] libmachine: Making call to close driver server
I0229 01:21:47.604066  324642 main.go:141] libmachine: (functional-601906) Calling .Close
I0229 01:21:47.604347  324642 main.go:141] libmachine: (functional-601906) DBG | Closing plugin on server side
I0229 01:21:47.604388  324642 main.go:141] libmachine: Successfully made call to close driver server
I0229 01:21:47.604396  324642 main.go:141] libmachine: Making call to close connection to plugin binary
I0229 01:21:47.604405  324642 main.go:141] libmachine: Making call to close driver server
I0229 01:21:47.604413  324642 main.go:141] libmachine: (functional-601906) Calling .Close
I0229 01:21:47.604640  324642 main.go:141] libmachine: Successfully made call to close driver server
I0229 01:21:47.604662  324642 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-601906 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (5.00s)
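The stderr above shows the build path on a containerd cluster: the local testdata/build context is tarred, copied to /var/lib/minikube/build inside the VM, unpacked, and built by driving BuildKit's buildctl directly with the dockerfile.v0 frontend. Re-running the same in-VM command by hand would look roughly like the following (staging path taken from this log; the wrapper itself is an illustrative sketch, not the test's own code):

	package main

	import (
		"os"
		"os/exec"
	)

	func main() {
		dir := "/var/lib/minikube/build/build.1968876232" // staging dir from this run
		build := "sudo buildctl build --frontend dockerfile.v0" +
			" --local context=" + dir +
			" --local dockerfile=" + dir +
			" --output type=image,name=localhost/my-image:functional-601906"
		cmd := exec.Command("out/minikube-linux-amd64", "-p", "functional-601906", "ssh", build)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			os.Exit(1)
		}
	}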

TestFunctional/parallel/ImageCommands/Setup (2.08s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (2.060399441s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-601906
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.08s)

TestFunctional/parallel/Version/short (0.06s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-amd64 -p functional-601906 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

TestFunctional/parallel/Version/components (0.53s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-601906 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.53s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.63s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-601906 image load --daemon gcr.io/google-containers/addon-resizer:functional-601906 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-601906 image load --daemon gcr.io/google-containers/addon-resizer:functional-601906 --alsologtostderr: (4.399123233s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-601906 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.63s)

TestFunctional/parallel/ServiceCmd/DeployApp (21.22s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1435: (dbg) Run:  kubectl --context functional-601906 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-601906 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-d7447cc7f-zlk7x" [c3d42d9b-62a0-43e9-997d-fdbc65028237] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-d7447cc7f-zlk7x" [c3d42d9b-62a0-43e9-997d-fdbc65028237] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 21.004613557s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (21.22s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.88s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-601906 image load --daemon gcr.io/google-containers/addon-resizer:functional-601906 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-amd64 -p functional-601906 image load --daemon gcr.io/google-containers/addon-resizer:functional-601906 --alsologtostderr: (2.636021618s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-601906 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.88s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (7.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.977056728s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-601906
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-601906 image load --daemon gcr.io/google-containers/addon-resizer:functional-601906 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-amd64 -p functional-601906 image load --daemon gcr.io/google-containers/addon-resizer:functional-601906 --alsologtostderr: (5.032908681s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-601906 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (7.29s)
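
The tag-and-load sequence is reproducible outside the harness; the commands below are lifted from the log above:

  docker pull gcr.io/google-containers/addon-resizer:1.8.9
  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-601906
  # copy the retagged image from the host Docker daemon into the cluster's containerd
  out/minikube-linux-amd64 -p functional-601906 image load --daemon gcr.io/google-containers/addon-resizer:functional-601906
  out/minikube-linux-amd64 -p functional-601906 image ls    # confirm the tag is present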

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.17s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-601906 image save gcr.io/google-containers/addon-resizer:functional-601906 /home/jenkins/workspace/KVM_Linux_containerd_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-linux-amd64 -p functional-601906 image save gcr.io/google-containers/addon-resizer:functional-601906 /home/jenkins/workspace/KVM_Linux_containerd_integration/addon-resizer-save.tar --alsologtostderr: (1.174531237s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.17s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.52s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-601906 image rm gcr.io/google-containers/addon-resizer:functional-601906 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-601906 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.52s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.7s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-601906 image load /home/jenkins/workspace/KVM_Linux_containerd_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-amd64 -p functional-601906 image load /home/jenkins/workspace/KVM_Linux_containerd_integration/addon-resizer-save.tar --alsologtostderr: (1.47200772s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-601906 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.70s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.28s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-601906
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-601906 image save --daemon gcr.io/google-containers/addon-resizer:functional-601906 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-linux-amd64 -p functional-601906 image save --daemon gcr.io/google-containers/addon-resizer:functional-601906 --alsologtostderr: (1.243730961s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-601906
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.28s)
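
Taken together, ImageSaveToFile, ImageRemove, ImageLoadFromFile and ImageSaveDaemon exercise a full tarball round trip. A condensed manual replay (the /tmp path is illustrative, not the workspace path the test uses):

  out/minikube-linux-amd64 -p functional-601906 image save gcr.io/google-containers/addon-resizer:functional-601906 /tmp/addon-resizer-save.tar
  out/minikube-linux-amd64 -p functional-601906 image rm gcr.io/google-containers/addon-resizer:functional-601906
  out/minikube-linux-amd64 -p functional-601906 image load /tmp/addon-resizer-save.tar
  # export back into the host Docker daemon and verify it arrived
  out/minikube-linux-amd64 -p functional-601906 image save --daemon gcr.io/google-containers/addon-resizer:functional-601906
  docker image inspect gcr.io/google-containers/addon-resizer:functional-601906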

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.31s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.31s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.26s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1311: Took "203.865953ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1325: Took "60.865406ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.26s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.28s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1362: Took "208.282679ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1375: Took "68.551897ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.28s)
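
The ~70ms runs above come from --light, which skips the per-profile status probe. Assuming the usual valid/invalid arrays in the JSON payload, the listing also scripts cleanly (a sketch, not part of the test):

  out/minikube-linux-amd64 profile list -o json --light | jq -r '.valid[].Name'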

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (7.45s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-601906 /tmp/TestFunctionalparallelMountCmdany-port2906923021/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1709169696906028294" to /tmp/TestFunctionalparallelMountCmdany-port2906923021/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1709169696906028294" to /tmp/TestFunctionalparallelMountCmdany-port2906923021/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1709169696906028294" to /tmp/TestFunctionalparallelMountCmdany-port2906923021/001/test-1709169696906028294
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-601906 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-601906 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (208.812164ms)

                                                
                                                
** stderr **
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-601906 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-601906 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Feb 29 01:21 created-by-test
-rw-r--r-- 1 docker docker 24 Feb 29 01:21 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Feb 29 01:21 test-1709169696906028294
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-601906 ssh cat /mount-9p/test-1709169696906028294
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-601906 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [b510ce3d-8577-4ac4-905c-8f0437e9524e] Pending
helpers_test.go:344: "busybox-mount" [b510ce3d-8577-4ac4-905c-8f0437e9524e] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [b510ce3d-8577-4ac4-905c-8f0437e9524e] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [b510ce3d-8577-4ac4-905c-8f0437e9524e] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.00664219s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-601906 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-601906 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-601906 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-601906 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-601906 /tmp/TestFunctionalparallelMountCmdany-port2906923021/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.45s)
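
Note that the first findmnt probe exits non-zero only because the test polls before the 9p mount has finished coming up; the immediate retry succeeds. A manual equivalent (host path illustrative):

  out/minikube-linux-amd64 mount -p functional-601906 /tmp/mount-src:/mount-9p &   # serves until killed
  # poll until the 9p filesystem is visible inside the guest, then inspect it
  out/minikube-linux-amd64 -p functional-601906 ssh "findmnt -T /mount-9p | grep 9p"
  out/minikube-linux-amd64 -p functional-601906 ssh -- ls -la /mount-9p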

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.46s)
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-linux-amd64 -p functional-601906 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.46s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.56s)
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-linux-amd64 -p functional-601906 service list -o json
functional_test.go:1490: Took "557.807843ms" to run "out/minikube-linux-amd64 -p functional-601906 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.56s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.37s)
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-linux-amd64 -p functional-601906 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.168.39.38:30179
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.37s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.57s)
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-linux-amd64 -p functional-601906 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.57s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.41s)
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-linux-amd64 -p functional-601906 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.168.39.38:30179
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.41s)
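
The four ServiceCmd lookups above are variants of the same query; for reference:

  out/minikube-linux-amd64 -p functional-601906 service list                      # table form
  out/minikube-linux-amd64 -p functional-601906 service list -o json              # machine-readable
  out/minikube-linux-amd64 -p functional-601906 service hello-node --url          # http://192.168.39.38:30179
  out/minikube-linux-amd64 -p functional-601906 service --https --url hello-node  # https://192.168.39.38:30179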

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.52s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-601906 /tmp/TestFunctionalparallelMountCmdspecific-port2091138681/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-601906 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-601906 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (207.805293ms)

                                                
                                                
** stderr **
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-601906 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-601906 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-601906 /tmp/TestFunctionalparallelMountCmdspecific-port2091138681/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-601906 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-601906 ssh "sudo umount -f /mount-9p": exit status 1 (212.511907ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr **
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-601906 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-601906 /tmp/TestFunctionalparallelMountCmdspecific-port2091138681/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.52s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.62s)
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-601906 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3134110796/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-601906 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3134110796/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-601906 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3134110796/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-601906 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-601906 ssh "findmnt -T" /mount1: exit status 1 (272.506747ms)

                                                
                                                
** stderr **
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-601906 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-601906 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-601906 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-601906 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-601906 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3134110796/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-601906 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3134110796/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-601906 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3134110796/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.62s)
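
VerifyCleanup shows that a single --kill sweep tears down every mount daemon for the profile, which is why the per-mount stop steps afterwards find no surviving parent process. Sketch (source path illustrative):

  out/minikube-linux-amd64 mount -p functional-601906 /tmp/src:/mount1 &
  out/minikube-linux-amd64 mount -p functional-601906 /tmp/src:/mount2 &
  out/minikube-linux-amd64 mount -p functional-601906 /tmp/src:/mount3 &
  # one kill flag terminates all three background mount processes
  out/minikube-linux-amd64 mount -p functional-601906 --kill=true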

                                                
                                    
TestFunctional/delete_addon-resizer_images (0.07s)
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-601906
--- PASS: TestFunctional/delete_addon-resizer_images (0.07s)

                                                
                                    
TestFunctional/delete_my-image_image (0.01s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-601906
--- PASS: TestFunctional/delete_my-image_image (0.01s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.01s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-601906
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                    
TestJSONOutput/start/Command (100.15s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-114599 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=containerd
E0229 01:31:14.620888  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/functional-601906/client.crt: no such file or directory
E0229 01:31:42.310409  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/functional-601906/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-114599 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=containerd: (1m40.151112369s)
--- PASS: TestJSONOutput/start/Command (100.15s)
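
With --output=json, start emits one CloudEvents-style JSON object per line; the Audit and *CurrentSteps subtests below validate that stream. It can also be followed interactively with jq (profile name illustrative):

  out/minikube-linux-amd64 start -p demo --output=json --driver=kvm2 --container-runtime=containerd \
    | jq -r 'select(.type == "io.k8s.sigs.minikube.step") | .data.currentstep + ": " + .data.message'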

                                                
                                    
TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.74s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-114599 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.74s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.66s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-114599 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.66s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (7.11s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-114599 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-114599 --output=json --user=testUser: (7.109084206s)
--- PASS: TestJSONOutput/stop/Command (7.11s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.21s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-988939 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-988939 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (75.602164ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"70ae717c-0704-4900-95e1-80897573d367","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-988939] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"51b94685-9db1-445c-847d-c08b9c6c4977","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18063"}}
	{"specversion":"1.0","id":"8141f19a-1b76-49b9-bf6b-76703b11cf6c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"938a00c7-abd3-4cfb-bedc-3781ee239930","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/18063-309085/kubeconfig"}}
	{"specversion":"1.0","id":"74bcc881-6a1d-4a7f-b318-0f7ef6efdb04","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/18063-309085/.minikube"}}
	{"specversion":"1.0","id":"e61a034d-6ea6-4d79-8c1b-24774cad93ef","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"7cf7072d-3fd6-434c-ba36-ce96fc15ae1b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"cd64970e-d96d-4556-b305-24078f908786","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-988939" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-988939
--- PASS: TestErrorJSONOutput (0.21s)
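
As the stdout above shows, the failure surfaces as an io.k8s.sigs.minikube.error event whose data payload carries the exit code (56) and failure name (DRV_UNSUPPORTED_OS). A small jq filter pulls that out (profile name illustrative):

  out/minikube-linux-amd64 start -p bad-driver --output=json --driver=fail \
    | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.name + " (exit " + .data.exitcode + "): " + .data.message'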

                                                
                                    
TestMainNoArgs (0.06s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
TestMinikubeProfile (95.1s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-583289 --driver=kvm2  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-583289 --driver=kvm2  --container-runtime=containerd: (45.947378953s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-585912 --driver=kvm2  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-585912 --driver=kvm2  --container-runtime=containerd: (46.314602081s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-583289
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-585912
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-585912" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-585912
helpers_test.go:175: Cleaning up "first-583289" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-583289
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-583289: (1.00660072s)
--- PASS: TestMinikubeProfile (95.10s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (32.64s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-829984 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd
E0229 01:34:28.597331  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/addons-026134/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-829984 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (31.640251997s)
--- PASS: TestMountStart/serial/StartWithMountFirst (32.64s)
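
MountStart drives the built-in host mount at boot: --mount plus explicit uid/gid/msize/port, with --no-kubernetes so only the VM is provisioned. The guest side defaults to /minikube-host, which the Verify* subtests below check over ssh:

  out/minikube-linux-amd64 start -p mount-start-1-829984 --memory=2048 --mount \
    --mount-gid 0 --mount-uid 0 --mount-msize 6543 --mount-port 46464 \
    --no-kubernetes --driver=kvm2 --container-runtime=containerd
  out/minikube-linux-amd64 -p mount-start-1-829984 ssh -- mount | grep 9p   # a 9p entry proves the mount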

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.4s)
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-829984 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-829984 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.40s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (31.53s)
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-846529 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-846529 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (30.529998621s)
--- PASS: TestMountStart/serial/StartWithMountSecond (31.53s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.39s)
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-846529 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-846529 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.39s)

                                                
                                    
TestMountStart/serial/DeleteFirst (0.88s)
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-829984 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.88s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.39s)
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-846529 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-846529 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.39s)

                                                
                                    
TestMountStart/serial/Stop (1.23s)
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-846529
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-846529: (1.229832872s)
--- PASS: TestMountStart/serial/Stop (1.23s)

                                                
                                    
TestMountStart/serial/RestartStopped (22.96s)
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-846529
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-846529: (21.963317634s)
--- PASS: TestMountStart/serial/RestartStopped (22.96s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.39s)
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-846529 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-846529 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.39s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (185.08s)
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:86: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-675288 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd
E0229 01:35:51.644309  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/addons-026134/client.crt: no such file or directory
E0229 01:36:14.620926  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/functional-601906/client.crt: no such file or directory
multinode_test.go:86: (dbg) Done: out/minikube-linux-amd64 start -p multinode-675288 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd: (3m4.644160579s)
multinode_test.go:92: (dbg) Run:  out/minikube-linux-amd64 -p multinode-675288 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (185.08s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (5.71s)
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:509: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-675288 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:514: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-675288 -- rollout status deployment/busybox
multinode_test.go:514: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-675288 -- rollout status deployment/busybox: (3.915222292s)
multinode_test.go:521: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-675288 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:544: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-675288 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:552: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-675288 -- exec busybox-5b5d89c9d6-mdkm9 -- nslookup kubernetes.io
multinode_test.go:552: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-675288 -- exec busybox-5b5d89c9d6-rdcl2 -- nslookup kubernetes.io
multinode_test.go:562: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-675288 -- exec busybox-5b5d89c9d6-mdkm9 -- nslookup kubernetes.default
multinode_test.go:562: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-675288 -- exec busybox-5b5d89c9d6-rdcl2 -- nslookup kubernetes.default
multinode_test.go:570: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-675288 -- exec busybox-5b5d89c9d6-mdkm9 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:570: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-675288 -- exec busybox-5b5d89c9d6-rdcl2 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.71s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.89s)
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:580: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-675288 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:588: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-675288 -- exec busybox-5b5d89c9d6-mdkm9 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:599: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-675288 -- exec busybox-5b5d89c9d6-mdkm9 -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:588: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-675288 -- exec busybox-5b5d89c9d6-rdcl2 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:599: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-675288 -- exec busybox-5b5d89c9d6-rdcl2 -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.89s)

                                                
                                    
TestMultiNode/serial/AddNode (40.45s)
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:111: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-675288 -v 3 --alsologtostderr
multinode_test.go:111: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-675288 -v 3 --alsologtostderr: (39.857377051s)
multinode_test.go:117: (dbg) Run:  out/minikube-linux-amd64 -p multinode-675288 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (40.45s)
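
Workers join an existing profile with node add; status then reports the control plane plus each worker, as the StopNode output further down illustrates:

  out/minikube-linux-amd64 node add -p multinode-675288 -v 3 --alsologtostderr
  out/minikube-linux-amd64 -p multinode-675288 status
  kubectl --context multinode-675288 get nodes    # multinode-675288, -m02, -m03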

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:211: (dbg) Run:  kubectl --context multinode-675288 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.21s)
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:133: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.21s)

                                                
                                    
TestMultiNode/serial/CopyFile (7.49s)
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:174: (dbg) Run:  out/minikube-linux-amd64 -p multinode-675288 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-675288 cp testdata/cp-test.txt multinode-675288:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-675288 ssh -n multinode-675288 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-675288 cp multinode-675288:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3759794240/001/cp-test_multinode-675288.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-675288 ssh -n multinode-675288 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-675288 cp multinode-675288:/home/docker/cp-test.txt multinode-675288-m02:/home/docker/cp-test_multinode-675288_multinode-675288-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-675288 ssh -n multinode-675288 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-675288 ssh -n multinode-675288-m02 "sudo cat /home/docker/cp-test_multinode-675288_multinode-675288-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-675288 cp multinode-675288:/home/docker/cp-test.txt multinode-675288-m03:/home/docker/cp-test_multinode-675288_multinode-675288-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-675288 ssh -n multinode-675288 "sudo cat /home/docker/cp-test.txt"
E0229 01:39:28.597411  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/addons-026134/client.crt: no such file or directory
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-675288 ssh -n multinode-675288-m03 "sudo cat /home/docker/cp-test_multinode-675288_multinode-675288-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-675288 cp testdata/cp-test.txt multinode-675288-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-675288 ssh -n multinode-675288-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-675288 cp multinode-675288-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3759794240/001/cp-test_multinode-675288-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-675288 ssh -n multinode-675288-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-675288 cp multinode-675288-m02:/home/docker/cp-test.txt multinode-675288:/home/docker/cp-test_multinode-675288-m02_multinode-675288.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-675288 ssh -n multinode-675288-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-675288 ssh -n multinode-675288 "sudo cat /home/docker/cp-test_multinode-675288-m02_multinode-675288.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-675288 cp multinode-675288-m02:/home/docker/cp-test.txt multinode-675288-m03:/home/docker/cp-test_multinode-675288-m02_multinode-675288-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-675288 ssh -n multinode-675288-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-675288 ssh -n multinode-675288-m03 "sudo cat /home/docker/cp-test_multinode-675288-m02_multinode-675288-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-675288 cp testdata/cp-test.txt multinode-675288-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-675288 ssh -n multinode-675288-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-675288 cp multinode-675288-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3759794240/001/cp-test_multinode-675288-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-675288 ssh -n multinode-675288-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-675288 cp multinode-675288-m03:/home/docker/cp-test.txt multinode-675288:/home/docker/cp-test_multinode-675288-m03_multinode-675288.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-675288 ssh -n multinode-675288-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-675288 ssh -n multinode-675288 "sudo cat /home/docker/cp-test_multinode-675288-m03_multinode-675288.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-675288 cp multinode-675288-m03:/home/docker/cp-test.txt multinode-675288-m02:/home/docker/cp-test_multinode-675288-m03_multinode-675288-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-675288 ssh -n multinode-675288-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-675288 ssh -n multinode-675288-m02 "sudo cat /home/docker/cp-test_multinode-675288-m03_multinode-675288-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.49s)
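
CopyFile walks every direction minikube cp supports: host to node, node to host, and node to node, each verified over ssh as in the log. One hop of each shape (the /tmp target is illustrative):

  out/minikube-linux-amd64 -p multinode-675288 cp testdata/cp-test.txt multinode-675288-m02:/home/docker/cp-test.txt
  out/minikube-linux-amd64 -p multinode-675288 cp multinode-675288-m02:/home/docker/cp-test.txt /tmp/cp-test_m02.txt
  out/minikube-linux-amd64 -p multinode-675288 cp multinode-675288-m02:/home/docker/cp-test.txt multinode-675288-m03:/home/docker/cp-test.txt
  out/minikube-linux-amd64 -p multinode-675288 ssh -n multinode-675288-m03 "sudo cat /home/docker/cp-test.txt"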

                                                
                                    
TestMultiNode/serial/StopNode (2.29s)
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:238: (dbg) Run:  out/minikube-linux-amd64 -p multinode-675288 node stop m03
multinode_test.go:238: (dbg) Done: out/minikube-linux-amd64 -p multinode-675288 node stop m03: (1.405181176s)
multinode_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p multinode-675288 status
multinode_test.go:244: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-675288 status: exit status 7 (439.585401ms)

                                                
                                                
-- stdout --
	multinode-675288
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-675288-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-675288-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:251: (dbg) Run:  out/minikube-linux-amd64 -p multinode-675288 status --alsologtostderr
multinode_test.go:251: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-675288 status --alsologtostderr: exit status 7 (440.469506ms)

                                                
                                                
-- stdout --
	multinode-675288
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-675288-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-675288-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0229 01:39:35.323452  332472 out.go:291] Setting OutFile to fd 1 ...
	I0229 01:39:35.323594  332472 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 01:39:35.323604  332472 out.go:304] Setting ErrFile to fd 2...
	I0229 01:39:35.323611  332472 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 01:39:35.323819  332472 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18063-309085/.minikube/bin
	I0229 01:39:35.324009  332472 out.go:298] Setting JSON to false
	I0229 01:39:35.324048  332472 mustload.go:65] Loading cluster: multinode-675288
	I0229 01:39:35.324142  332472 notify.go:220] Checking for updates...
	I0229 01:39:35.324522  332472 config.go:182] Loaded profile config "multinode-675288": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0229 01:39:35.324542  332472 status.go:255] checking status of multinode-675288 ...
	I0229 01:39:35.324983  332472 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0229 01:39:35.325065  332472 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 01:39:35.340097  332472 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38181
	I0229 01:39:35.340529  332472 main.go:141] libmachine: () Calling .GetVersion
	I0229 01:39:35.341148  332472 main.go:141] libmachine: Using API Version  1
	I0229 01:39:35.341166  332472 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 01:39:35.341516  332472 main.go:141] libmachine: () Calling .GetMachineName
	I0229 01:39:35.341720  332472 main.go:141] libmachine: (multinode-675288) Calling .GetState
	I0229 01:39:35.343164  332472 status.go:330] multinode-675288 host status = "Running" (err=<nil>)
	I0229 01:39:35.343181  332472 host.go:66] Checking if "multinode-675288" exists ...
	I0229 01:39:35.343457  332472 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0229 01:39:35.343497  332472 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 01:39:35.357963  332472 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44871
	I0229 01:39:35.358386  332472 main.go:141] libmachine: () Calling .GetVersion
	I0229 01:39:35.358840  332472 main.go:141] libmachine: Using API Version  1
	I0229 01:39:35.358863  332472 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 01:39:35.359177  332472 main.go:141] libmachine: () Calling .GetMachineName
	I0229 01:39:35.359369  332472 main.go:141] libmachine: (multinode-675288) Calling .GetIP
	I0229 01:39:35.362232  332472 main.go:141] libmachine: (multinode-675288) DBG | domain multinode-675288 has defined MAC address 52:54:00:89:99:ec in network mk-multinode-675288
	I0229 01:39:35.362617  332472 main.go:141] libmachine: (multinode-675288) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:99:ec", ip: ""} in network mk-multinode-675288: {Iface:virbr1 ExpiryTime:2024-02-29 02:35:49 +0000 UTC Type:0 Mac:52:54:00:89:99:ec Iaid: IPaddr:192.168.39.218 Prefix:24 Hostname:multinode-675288 Clientid:01:52:54:00:89:99:ec}
	I0229 01:39:35.362650  332472 main.go:141] libmachine: (multinode-675288) DBG | domain multinode-675288 has defined IP address 192.168.39.218 and MAC address 52:54:00:89:99:ec in network mk-multinode-675288
	I0229 01:39:35.362769  332472 host.go:66] Checking if "multinode-675288" exists ...
	I0229 01:39:35.363025  332472 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0229 01:39:35.363072  332472 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 01:39:35.378028  332472 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45295
	I0229 01:39:35.378518  332472 main.go:141] libmachine: () Calling .GetVersion
	I0229 01:39:35.379105  332472 main.go:141] libmachine: Using API Version  1
	I0229 01:39:35.379132  332472 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 01:39:35.379441  332472 main.go:141] libmachine: () Calling .GetMachineName
	I0229 01:39:35.379674  332472 main.go:141] libmachine: (multinode-675288) Calling .DriverName
	I0229 01:39:35.379878  332472 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0229 01:39:35.379906  332472 main.go:141] libmachine: (multinode-675288) Calling .GetSSHHostname
	I0229 01:39:35.382452  332472 main.go:141] libmachine: (multinode-675288) DBG | domain multinode-675288 has defined MAC address 52:54:00:89:99:ec in network mk-multinode-675288
	I0229 01:39:35.382918  332472 main.go:141] libmachine: (multinode-675288) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:99:ec", ip: ""} in network mk-multinode-675288: {Iface:virbr1 ExpiryTime:2024-02-29 02:35:49 +0000 UTC Type:0 Mac:52:54:00:89:99:ec Iaid: IPaddr:192.168.39.218 Prefix:24 Hostname:multinode-675288 Clientid:01:52:54:00:89:99:ec}
	I0229 01:39:35.382955  332472 main.go:141] libmachine: (multinode-675288) DBG | domain multinode-675288 has defined IP address 192.168.39.218 and MAC address 52:54:00:89:99:ec in network mk-multinode-675288
	I0229 01:39:35.383088  332472 main.go:141] libmachine: (multinode-675288) Calling .GetSSHPort
	I0229 01:39:35.383253  332472 main.go:141] libmachine: (multinode-675288) Calling .GetSSHKeyPath
	I0229 01:39:35.383416  332472 main.go:141] libmachine: (multinode-675288) Calling .GetSSHUsername
	I0229 01:39:35.383555  332472 sshutil.go:53] new ssh client: &{IP:192.168.39.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-309085/.minikube/machines/multinode-675288/id_rsa Username:docker}
	I0229 01:39:35.470308  332472 ssh_runner.go:195] Run: systemctl --version
	I0229 01:39:35.477003  332472 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 01:39:35.491974  332472 kubeconfig.go:92] found "multinode-675288" server: "https://192.168.39.218:8443"
	I0229 01:39:35.492005  332472 api_server.go:166] Checking apiserver status ...
	I0229 01:39:35.492043  332472 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:39:35.507674  332472 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1128/cgroup
	W0229 01:39:35.519343  332472 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1128/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0229 01:39:35.519409  332472 ssh_runner.go:195] Run: ls
	I0229 01:39:35.524382  332472 api_server.go:253] Checking apiserver healthz at https://192.168.39.218:8443/healthz ...
	I0229 01:39:35.530161  332472 api_server.go:279] https://192.168.39.218:8443/healthz returned 200:
	ok
	I0229 01:39:35.530197  332472 status.go:421] multinode-675288 apiserver status = Running (err=<nil>)
	I0229 01:39:35.530223  332472 status.go:257] multinode-675288 status: &{Name:multinode-675288 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0229 01:39:35.530264  332472 status.go:255] checking status of multinode-675288-m02 ...
	I0229 01:39:35.530559  332472 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0229 01:39:35.530595  332472 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 01:39:35.545832  332472 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44881
	I0229 01:39:35.546237  332472 main.go:141] libmachine: () Calling .GetVersion
	I0229 01:39:35.546706  332472 main.go:141] libmachine: Using API Version  1
	I0229 01:39:35.546729  332472 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 01:39:35.547025  332472 main.go:141] libmachine: () Calling .GetMachineName
	I0229 01:39:35.547221  332472 main.go:141] libmachine: (multinode-675288-m02) Calling .GetState
	I0229 01:39:35.548807  332472 status.go:330] multinode-675288-m02 host status = "Running" (err=<nil>)
	I0229 01:39:35.548827  332472 host.go:66] Checking if "multinode-675288-m02" exists ...
	I0229 01:39:35.549088  332472 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0229 01:39:35.549135  332472 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 01:39:35.564039  332472 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46349
	I0229 01:39:35.564387  332472 main.go:141] libmachine: () Calling .GetVersion
	I0229 01:39:35.564862  332472 main.go:141] libmachine: Using API Version  1
	I0229 01:39:35.564884  332472 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 01:39:35.565215  332472 main.go:141] libmachine: () Calling .GetMachineName
	I0229 01:39:35.565408  332472 main.go:141] libmachine: (multinode-675288-m02) Calling .GetIP
	I0229 01:39:35.567931  332472 main.go:141] libmachine: (multinode-675288-m02) DBG | domain multinode-675288-m02 has defined MAC address 52:54:00:7c:b3:9c in network mk-multinode-675288
	I0229 01:39:35.568360  332472 main.go:141] libmachine: (multinode-675288-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:b3:9c", ip: ""} in network mk-multinode-675288: {Iface:virbr1 ExpiryTime:2024-02-29 02:36:58 +0000 UTC Type:0 Mac:52:54:00:7c:b3:9c Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:multinode-675288-m02 Clientid:01:52:54:00:7c:b3:9c}
	I0229 01:39:35.568401  332472 main.go:141] libmachine: (multinode-675288-m02) DBG | domain multinode-675288-m02 has defined IP address 192.168.39.186 and MAC address 52:54:00:7c:b3:9c in network mk-multinode-675288
	I0229 01:39:35.568543  332472 host.go:66] Checking if "multinode-675288-m02" exists ...
	I0229 01:39:35.568827  332472 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0229 01:39:35.568864  332472 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 01:39:35.583059  332472 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34977
	I0229 01:39:35.583415  332472 main.go:141] libmachine: () Calling .GetVersion
	I0229 01:39:35.583826  332472 main.go:141] libmachine: Using API Version  1
	I0229 01:39:35.583849  332472 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 01:39:35.584191  332472 main.go:141] libmachine: () Calling .GetMachineName
	I0229 01:39:35.584367  332472 main.go:141] libmachine: (multinode-675288-m02) Calling .DriverName
	I0229 01:39:35.584531  332472 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0229 01:39:35.584550  332472 main.go:141] libmachine: (multinode-675288-m02) Calling .GetSSHHostname
	I0229 01:39:35.586888  332472 main.go:141] libmachine: (multinode-675288-m02) DBG | domain multinode-675288-m02 has defined MAC address 52:54:00:7c:b3:9c in network mk-multinode-675288
	I0229 01:39:35.587297  332472 main.go:141] libmachine: (multinode-675288-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:b3:9c", ip: ""} in network mk-multinode-675288: {Iface:virbr1 ExpiryTime:2024-02-29 02:36:58 +0000 UTC Type:0 Mac:52:54:00:7c:b3:9c Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:multinode-675288-m02 Clientid:01:52:54:00:7c:b3:9c}
	I0229 01:39:35.587322  332472 main.go:141] libmachine: (multinode-675288-m02) DBG | domain multinode-675288-m02 has defined IP address 192.168.39.186 and MAC address 52:54:00:7c:b3:9c in network mk-multinode-675288
	I0229 01:39:35.587394  332472 main.go:141] libmachine: (multinode-675288-m02) Calling .GetSSHPort
	I0229 01:39:35.587574  332472 main.go:141] libmachine: (multinode-675288-m02) Calling .GetSSHKeyPath
	I0229 01:39:35.587721  332472 main.go:141] libmachine: (multinode-675288-m02) Calling .GetSSHUsername
	I0229 01:39:35.587896  332472 sshutil.go:53] new ssh client: &{IP:192.168.39.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-309085/.minikube/machines/multinode-675288-m02/id_rsa Username:docker}
	I0229 01:39:35.669817  332472 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 01:39:35.685059  332472 status.go:257] multinode-675288-m02 status: &{Name:multinode-675288-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0229 01:39:35.685113  332472 status.go:255] checking status of multinode-675288-m03 ...
	I0229 01:39:35.685537  332472 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0229 01:39:35.685590  332472 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 01:39:35.702443  332472 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44845
	I0229 01:39:35.702898  332472 main.go:141] libmachine: () Calling .GetVersion
	I0229 01:39:35.703433  332472 main.go:141] libmachine: Using API Version  1
	I0229 01:39:35.703464  332472 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 01:39:35.703799  332472 main.go:141] libmachine: () Calling .GetMachineName
	I0229 01:39:35.703980  332472 main.go:141] libmachine: (multinode-675288-m03) Calling .GetState
	I0229 01:39:35.705550  332472 status.go:330] multinode-675288-m03 host status = "Stopped" (err=<nil>)
	I0229 01:39:35.705564  332472 status.go:343] host is not running, skipping remaining checks
	I0229 01:39:35.705587  332472 status.go:257] multinode-675288-m03 status: &{Name:multinode-675288-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.29s)
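
Note: the stderr trace above shows how `minikube status` derives the apiserver state: it finds the kube-apiserver process over SSH, then probes https://<control-plane-ip>:8443/healthz and maps an HTTP 200 "ok" to Running (the failed freezer-cgroup lookup is only a warning). A minimal Go sketch of that probe follows; probeHealthz is a hypothetical helper name, and minikube's real client trusts the cluster CA rather than skipping verification:

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
    )

    // probeHealthz mirrors the check in the trace: GET /healthz on the
    // apiserver and treat HTTP 200 as "Running". InsecureSkipVerify only
    // keeps the sketch short; the apiserver serves a self-signed cert.
    func probeHealthz(addr string) (string, error) {
        client := &http.Client{Transport: &http.Transport{
            TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
        }}
        resp, err := client.Get("https://" + addr + "/healthz")
        if err != nil {
            return "Stopped", err
        }
        defer resp.Body.Close()
        if resp.StatusCode == http.StatusOK {
            return "Running", nil
        }
        return "Error", fmt.Errorf("healthz returned %d", resp.StatusCode)
    }

    func main() {
        state, err := probeHealthz("192.168.39.218:8443")
        fmt.Println(state, err)
    }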

TestMultiNode/serial/StartAfterStop (23.99s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-675288 node start m03 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-675288 node start m03 --alsologtostderr: (23.334618257s)
multinode_test.go:289: (dbg) Run:  out/minikube-linux-amd64 -p multinode-675288 status
multinode_test.go:303: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (23.99s)

TestMultiNode/serial/RestartKeepsNodes (301.98s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:311: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-675288
multinode_test.go:318: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-675288
E0229 01:41:14.621338  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/functional-601906/client.crt: no such file or directory
E0229 01:42:37.673412  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/functional-601906/client.crt: no such file or directory
multinode_test.go:318: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-675288: (3m4.764541465s)
multinode_test.go:323: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-675288 --wait=true -v=8 --alsologtostderr
E0229 01:44:28.597558  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/addons-026134/client.crt: no such file or directory
multinode_test.go:323: (dbg) Done: out/minikube-linux-amd64 start -p multinode-675288 --wait=true -v=8 --alsologtostderr: (1m57.095049501s)
multinode_test.go:328: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-675288
--- PASS: TestMultiNode/serial/RestartKeepsNodes (301.98s)

TestMultiNode/serial/DeleteNode (1.74s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-675288 node delete m03
multinode_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p multinode-675288 node delete m03: (1.195934246s)
multinode_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p multinode-675288 status --alsologtostderr
multinode_test.go:452: (dbg) Run:  kubectl get nodes
multinode_test.go:460: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (1.74s)
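
Note: the last assertion renders each node's Ready condition through kubectl's -o go-template output, which is Go's text/template evaluated against the JSON document (keys stay lowercase: .items, .status, .type). A standalone sketch running the test's exact template over a hand-built node list:

    package main

    import (
        "os"
        "text/template"
    )

    // Template copied from the test: walks items -> status.conditions and
    // prints one " True"/" False" line per node's Ready condition.
    const tmpl = `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`

    func main() {
        node := func(ready string) map[string]any {
            return map[string]any{"status": map[string]any{
                "conditions": []any{map[string]any{"type": "Ready", "status": ready}},
            }}
        }
        // Two nodes remain after deleting m03; both should report " True".
        list := map[string]any{"items": []any{node("True"), node("True")}}
        t := template.Must(template.New("ready").Parse(tmpl))
        _ = t.Execute(os.Stdout, list)
    }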

TestMultiNode/serial/StopMultiNode (183.74s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:342: (dbg) Run:  out/minikube-linux-amd64 -p multinode-675288 stop
E0229 01:46:14.620799  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/functional-601906/client.crt: no such file or directory
multinode_test.go:342: (dbg) Done: out/minikube-linux-amd64 -p multinode-675288 stop: (3m3.553337635s)
multinode_test.go:348: (dbg) Run:  out/minikube-linux-amd64 -p multinode-675288 status
multinode_test.go:348: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-675288 status: exit status 7 (93.377071ms)

-- stdout --
	multinode-675288
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-675288-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p multinode-675288 status --alsologtostderr
multinode_test.go:355: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-675288 status --alsologtostderr: exit status 7 (94.964263ms)

-- stdout --
	multinode-675288
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-675288-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0229 01:48:07.120067  334613 out.go:291] Setting OutFile to fd 1 ...
	I0229 01:48:07.120329  334613 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 01:48:07.120338  334613 out.go:304] Setting ErrFile to fd 2...
	I0229 01:48:07.120342  334613 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 01:48:07.120525  334613 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18063-309085/.minikube/bin
	I0229 01:48:07.120685  334613 out.go:298] Setting JSON to false
	I0229 01:48:07.120711  334613 mustload.go:65] Loading cluster: multinode-675288
	I0229 01:48:07.120750  334613 notify.go:220] Checking for updates...
	I0229 01:48:07.121082  334613 config.go:182] Loaded profile config "multinode-675288": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0229 01:48:07.121099  334613 status.go:255] checking status of multinode-675288 ...
	I0229 01:48:07.121526  334613 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0229 01:48:07.121580  334613 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 01:48:07.137171  334613 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40785
	I0229 01:48:07.137612  334613 main.go:141] libmachine: () Calling .GetVersion
	I0229 01:48:07.138255  334613 main.go:141] libmachine: Using API Version  1
	I0229 01:48:07.138275  334613 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 01:48:07.138726  334613 main.go:141] libmachine: () Calling .GetMachineName
	I0229 01:48:07.139005  334613 main.go:141] libmachine: (multinode-675288) Calling .GetState
	I0229 01:48:07.140548  334613 status.go:330] multinode-675288 host status = "Stopped" (err=<nil>)
	I0229 01:48:07.140562  334613 status.go:343] host is not running, skipping remaining checks
	I0229 01:48:07.140570  334613 status.go:257] multinode-675288 status: &{Name:multinode-675288 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0229 01:48:07.140611  334613 status.go:255] checking status of multinode-675288-m02 ...
	I0229 01:48:07.140935  334613 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0229 01:48:07.140976  334613 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 01:48:07.155392  334613 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42077
	I0229 01:48:07.155732  334613 main.go:141] libmachine: () Calling .GetVersion
	I0229 01:48:07.156163  334613 main.go:141] libmachine: Using API Version  1
	I0229 01:48:07.156191  334613 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 01:48:07.156485  334613 main.go:141] libmachine: () Calling .GetMachineName
	I0229 01:48:07.156660  334613 main.go:141] libmachine: (multinode-675288-m02) Calling .GetState
	I0229 01:48:07.158054  334613 status.go:330] multinode-675288-m02 host status = "Stopped" (err=<nil>)
	I0229 01:48:07.158065  334613 status.go:343] host is not running, skipping remaining checks
	I0229 01:48:07.158070  334613 status.go:257] multinode-675288-m02 status: &{Name:multinode-675288-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (183.74s)
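
Note: both status runs above exit 7 by design rather than by accident: minikube encodes component state in the exit code, and 7 is consistent with OR-ing one bit each for host (1), kubelet (2) and apiserver (4) all being down. A hedged sketch of branching on that code; clusterFullyStopped is a hypothetical helper, and the exact bit values are a minikube implementation detail unless pinned to a version:

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    // clusterFullyStopped runs the same CLI as the test and inspects the
    // exit code instead of parsing stdout.
    func clusterFullyStopped(profile string) (bool, error) {
        err := exec.Command("out/minikube-linux-amd64", "status", "-p", profile).Run()
        if err == nil {
            return false, nil // exit 0: everything running
        }
        var ee *exec.ExitError
        if errors.As(err, &ee) {
            return ee.ExitCode() == 7, nil // 7 observed above for a fully stopped cluster
        }
        return false, err // binary missing, killed by signal, etc.
    }

    func main() {
        stopped, err := clusterFullyStopped("multinode-675288")
        fmt.Println(stopped, err)
    }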

TestMultiNode/serial/RestartMultiNode (88.22s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-675288 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd
E0229 01:49:28.596788  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/addons-026134/client.crt: no such file or directory
multinode_test.go:382: (dbg) Done: out/minikube-linux-amd64 start -p multinode-675288 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd: (1m27.67888844s)
multinode_test.go:388: (dbg) Run:  out/minikube-linux-amd64 -p multinode-675288 status --alsologtostderr
multinode_test.go:402: (dbg) Run:  kubectl get nodes
multinode_test.go:410: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (88.22s)

TestMultiNode/serial/ValidateNameConflict (48.92s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:471: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-675288
multinode_test.go:480: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-675288-m02 --driver=kvm2  --container-runtime=containerd
multinode_test.go:480: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-675288-m02 --driver=kvm2  --container-runtime=containerd: exit status 14 (74.702262ms)

-- stdout --
	* [multinode-675288-m02] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18063
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18063-309085/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18063-309085/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-675288-m02' is duplicated with machine name 'multinode-675288-m02' in profile 'multinode-675288'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:488: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-675288-m03 --driver=kvm2  --container-runtime=containerd
multinode_test.go:488: (dbg) Done: out/minikube-linux-amd64 start -p multinode-675288-m03 --driver=kvm2  --container-runtime=containerd: (47.747052744s)
multinode_test.go:495: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-675288
multinode_test.go:495: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-675288: exit status 80 (247.552319ms)

-- stdout --
	* Adding node m03 to cluster multinode-675288
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-675288-m03 already exists in multinode-675288-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:500: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-675288-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (48.92s)
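
Note: the two rejections above follow from minikube's machine-naming scheme: secondary nodes of profile P become machines P-m02, P-m03, and so on, so the profile name multinode-675288-m02 collides with an existing machine, while -m03 (whose machine was deleted earlier) only collides once `node add` tries to regenerate that name. A hypothetical sketch of the collision rule, under the assumption that only currently existing machine names count:

    package main

    import "fmt"

    // machineNames lists the machine names a profile currently owns:
    // the profile itself plus -mNN suffixes for secondary nodes.
    func machineNames(profile string, nodes int) []string {
        names := []string{profile}
        for i := 2; i <= nodes; i++ {
            names = append(names, fmt.Sprintf("%s-m%02d", profile, i))
        }
        return names
    }

    func conflicts(candidate, profile string, nodes int) bool {
        for _, n := range machineNames(profile, nodes) {
            if candidate == n {
                return true
            }
        }
        return false
    }

    func main() {
        // multinode-675288 has two nodes here (m03 was deleted above),
        // so "-m02" collides but "-m03" does not.
        fmt.Println(conflicts("multinode-675288-m02", "multinode-675288", 2)) // true
        fmt.Println(conflicts("multinode-675288-m03", "multinode-675288", 2)) // false
    }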

TestPreload (294.83s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-880182 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.24.4
E0229 01:51:14.620967  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/functional-601906/client.crt: no such file or directory
E0229 01:52:31.644834  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/addons-026134/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-880182 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.24.4: (2m12.198360605s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-880182 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-880182 image pull gcr.io/k8s-minikube/busybox: (2.684748517s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-880182
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-880182: (1m31.524369159s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-880182 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=containerd
E0229 01:54:28.597298  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/addons-026134/client.crt: no such file or directory
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-880182 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=containerd: (1m7.309587437s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-880182 image list
helpers_test.go:175: Cleaning up "test-preload-880182" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-880182
--- PASS: TestPreload (294.83s)
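
Note: the point of the sequence above is that an image pulled into a non-preloaded v1.24.4 containerd store survives a stop and a preload-enabled restart; the final `image list` is the assertion. A sketch of that check through the same CLI, with hasImage as a hypothetical helper name:

    package main

    import (
        "bufio"
        "bytes"
        "fmt"
        "os/exec"
        "strings"
    )

    // hasImage scans `minikube image list` stdout for a given repository.
    func hasImage(profile, needle string) (bool, error) {
        out, err := exec.Command("out/minikube-linux-amd64", "-p", profile, "image", "list").Output()
        if err != nil {
            return false, err
        }
        sc := bufio.NewScanner(bytes.NewReader(out))
        for sc.Scan() {
            if strings.Contains(sc.Text(), needle) {
                return true, nil
            }
        }
        return false, sc.Err()
    }

    func main() {
        ok, err := hasImage("test-preload-880182", "gcr.io/k8s-minikube/busybox")
        fmt.Println(ok, err)
    }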

TestScheduledStopUnix (120.42s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-426646 --memory=2048 --driver=kvm2  --container-runtime=containerd
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-426646 --memory=2048 --driver=kvm2  --container-runtime=containerd: (48.608859007s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-426646 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-426646 -n scheduled-stop-426646
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-426646 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-426646 --cancel-scheduled
E0229 01:56:14.621379  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/functional-601906/client.crt: no such file or directory
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-426646 -n scheduled-stop-426646
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-426646
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-426646 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-426646
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-426646: exit status 7 (76.258287ms)

-- stdout --
	scheduled-stop-426646
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-426646 -n scheduled-stop-426646
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-426646 -n scheduled-stop-426646: exit status 7 (82.020332ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-426646" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-426646
--- PASS: TestScheduledStopUnix (120.42s)
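
Note: `--schedule` hands the stop to a detached background process and `--cancel-scheduled` kills it, which is why status still reports Running after the cancel and only reports Stopped once a later 15s schedule is left to fire. Conceptually the lifecycle the test drives maps onto a cancellable timer; the sketch below is in-process only and not minikube's daemonized implementation:

    package main

    import (
        "fmt"
        "time"
    )

    type scheduledStop struct{ timer *time.Timer }

    // schedule arms a stop to run after d, like `stop --schedule 15s`.
    func schedule(d time.Duration, stop func()) *scheduledStop {
        return &scheduledStop{timer: time.AfterFunc(d, stop)}
    }

    // cancel mirrors `stop --cancel-scheduled`: true if the pending stop
    // was disarmed before it fired.
    func (s *scheduledStop) cancel() bool { return s.timer.Stop() }

    func main() {
        s := schedule(15*time.Second, func() { fmt.Println("stopping cluster") })
        fmt.Println("cancelled:", s.cancel())
    }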

TestRunningBinaryUpgrade (233.39s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.2073822576 start -p running-upgrade-538413 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.2073822576 start -p running-upgrade-538413 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd: (2m14.934739598s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-538413 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-538413 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m34.534912239s)
helpers_test.go:175: Cleaning up "running-upgrade-538413" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-538413
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-538413: (1.254078982s)
--- PASS: TestRunningBinaryUpgrade (233.39s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-493829 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-493829 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=containerd: exit status 14 (98.665052ms)

-- stdout --
	* [NoKubernetes-493829] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18063
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18063-309085/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18063-309085/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)
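
Note: the exit status 14 (MK_USAGE) is pure argument validation: asking for a specific --kubernetes-version contradicts --no-kubernetes, so minikube rejects the combination before touching the driver. A minimal sketch of such a mutual-exclusion check using the standard flag package (minikube itself wires its flags through cobra):

    package main

    import (
        "flag"
        "fmt"
        "os"
    )

    func main() {
        noK8s := flag.Bool("no-kubernetes", false, "start the VM without Kubernetes")
        k8sVersion := flag.String("kubernetes-version", "", "Kubernetes version to deploy")
        flag.Parse()

        // Checked before any expensive work, as in the run above.
        if *noK8s && *k8sVersion != "" {
            fmt.Fprintln(os.Stderr, "cannot specify --kubernetes-version with --no-kubernetes")
            os.Exit(14) // MK_USAGE
        }
        fmt.Println("ok")
    }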

TestNoKubernetes/serial/StartWithK8s (100.69s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-493829 --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-493829 --driver=kvm2  --container-runtime=containerd: (1m40.430330084s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-493829 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (100.69s)

TestNoKubernetes/serial/StartWithStopK8s (56.29s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-493829 --no-kubernetes --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-493829 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (54.953870455s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-493829 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-493829 status -o json: exit status 2 (274.71106ms)

-- stdout --
	{"Name":"NoKubernetes-493829","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-493829
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-493829: (1.061581203s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (56.29s)

TestNoKubernetes/serial/Start (36s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-493829 --no-kubernetes --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-493829 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (35.99572468s)
--- PASS: TestNoKubernetes/serial/Start (36.00s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.23s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-493829 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-493829 "sudo systemctl is-active --quiet service kubelet": exit status 1 (228.497264ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.23s)
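
Note: the assertion leans on systemctl's exit codes: `systemctl is-active --quiet <unit>` exits 0 only for an active unit and typically 3 (the LSB "not running" code) for an inactive one, which surfaces through SSH here as "Process exited with status 3". A sketch of reading that result from Go, with unitActive as a hypothetical helper name:

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    // unitActive reports whether a systemd unit is active; any clean
    // nonzero exit (e.g. status 3: inactive) means "not running".
    func unitActive(unit string) (bool, error) {
        err := exec.Command("systemctl", "is-active", "--quiet", unit).Run()
        if err == nil {
            return true, nil
        }
        var ee *exec.ExitError
        if errors.As(err, &ee) {
            return false, nil
        }
        return false, err
    }

    func main() {
        active, err := unitActive("kubelet")
        fmt.Println(active, err)
    }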

TestNoKubernetes/serial/ProfileList (28.81s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (14.533702671s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (14.277748906s)
--- PASS: TestNoKubernetes/serial/ProfileList (28.81s)

TestNoKubernetes/serial/Stop (2.68s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-493829
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-493829: (2.682782111s)
--- PASS: TestNoKubernetes/serial/Stop (2.68s)

TestNoKubernetes/serial/StartNoArgs (24.14s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-493829 --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-493829 --driver=kvm2  --container-runtime=containerd: (24.139007065s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (24.14s)

TestStoppedBinaryUpgrade/Setup (2.55s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.55s)

TestStoppedBinaryUpgrade/Upgrade (179.01s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.2723367402 start -p stopped-upgrade-260303 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.2723367402 start -p stopped-upgrade-260303 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd: (1m29.375430394s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.2723367402 -p stopped-upgrade-260303 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.2723367402 -p stopped-upgrade-260303 stop: (2.419802137s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-260303 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-260303 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m27.210146356s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (179.01s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.21s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-493829 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-493829 "sudo systemctl is-active --quiet service kubelet": exit status 1 (214.317512ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.21s)

TestNetworkPlugins/group/false (3.53s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-704272 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-704272 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=containerd: exit status 14 (117.066456ms)

-- stdout --
	* [false-704272] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18063
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18063-309085/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18063-309085/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I0229 02:01:34.508398  341915 out.go:291] Setting OutFile to fd 1 ...
	I0229 02:01:34.508800  341915 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 02:01:34.508850  341915 out.go:304] Setting ErrFile to fd 2...
	I0229 02:01:34.508869  341915 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 02:01:34.509314  341915 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18063-309085/.minikube/bin
	I0229 02:01:34.510466  341915 out.go:298] Setting JSON to false
	I0229 02:01:34.511556  341915 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":6239,"bootTime":1709165856,"procs":209,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1052-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0229 02:01:34.511631  341915 start.go:139] virtualization: kvm guest
	I0229 02:01:34.513615  341915 out.go:177] * [false-704272] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0229 02:01:34.515212  341915 out.go:177]   - MINIKUBE_LOCATION=18063
	I0229 02:01:34.516469  341915 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0229 02:01:34.515282  341915 notify.go:220] Checking for updates...
	I0229 02:01:34.518904  341915 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18063-309085/kubeconfig
	I0229 02:01:34.520279  341915 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18063-309085/.minikube
	I0229 02:01:34.521524  341915 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0229 02:01:34.522711  341915 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0229 02:01:34.524311  341915 config.go:182] Loaded profile config "cert-expiration-113971": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0229 02:01:34.524422  341915 config.go:182] Loaded profile config "kubernetes-upgrade-335938": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.16.0
	I0229 02:01:34.524516  341915 config.go:182] Loaded profile config "stopped-upgrade-260303": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.24.1
	I0229 02:01:34.524642  341915 driver.go:392] Setting default libvirt URI to qemu:///system
	I0229 02:01:34.561250  341915 out.go:177] * Using the kvm2 driver based on user configuration
	I0229 02:01:34.562508  341915 start.go:299] selected driver: kvm2
	I0229 02:01:34.562523  341915 start.go:903] validating driver "kvm2" against <nil>
	I0229 02:01:34.562534  341915 start.go:914] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0229 02:01:34.564347  341915 out.go:177] 
	W0229 02:01:34.565521  341915 out.go:239] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I0229 02:01:34.566703  341915 out.go:177] 

** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-704272 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-704272

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-704272

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-704272

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-704272

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-704272

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-704272

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-704272

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-704272

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-704272

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-704272

>>> host: /etc/nsswitch.conf:
* Profile "false-704272" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-704272"

>>> host: /etc/hosts:
* Profile "false-704272" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-704272"

>>> host: /etc/resolv.conf:
* Profile "false-704272" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-704272"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-704272

>>> host: crictl pods:
* Profile "false-704272" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-704272"

>>> host: crictl containers:
* Profile "false-704272" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-704272"

>>> k8s: describe netcat deployment:
error: context "false-704272" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-704272" does not exist

>>> k8s: netcat logs:
error: context "false-704272" does not exist

>>> k8s: describe coredns deployment:
error: context "false-704272" does not exist

>>> k8s: describe coredns pods:
error: context "false-704272" does not exist

>>> k8s: coredns logs:
error: context "false-704272" does not exist

>>> k8s: describe api server pod(s):
error: context "false-704272" does not exist

>>> k8s: api server logs:
error: context "false-704272" does not exist

>>> host: /etc/cni:
* Profile "false-704272" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-704272"

>>> host: ip a s:
* Profile "false-704272" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-704272"

>>> host: ip r s:
* Profile "false-704272" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-704272"

>>> host: iptables-save:
* Profile "false-704272" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-704272"

>>> host: iptables table nat:
* Profile "false-704272" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-704272"

>>> k8s: describe kube-proxy daemon set:
error: context "false-704272" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-704272" does not exist

>>> k8s: kube-proxy logs:
error: context "false-704272" does not exist

>>> host: kubelet daemon status:
* Profile "false-704272" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-704272"

>>> host: kubelet daemon config:
* Profile "false-704272" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-704272"

>>> k8s: kubelet logs:
* Profile "false-704272" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-704272"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-704272" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-704272"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-704272" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-704272"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/18063-309085/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 29 Feb 2024 02:00:23 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.39.251:8443
  name: cert-expiration-113971
contexts:
- context:
    cluster: cert-expiration-113971
    extensions:
    - extension:
        last-update: Thu, 29 Feb 2024 02:00:23 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: context_info
    namespace: default
    user: cert-expiration-113971
  name: cert-expiration-113971
current-context: ""
kind: Config
preferences: {}
users:
- name: cert-expiration-113971
  user:
    client-certificate: /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/cert-expiration-113971/client.crt
    client-key: /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/cert-expiration-113971/client.key

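Note on the kubeconfig above: current-context is empty because the profile under inspection (false-704272) was never created, so only the unrelated cert-expiration-113971 entries remain. When reading such a dump by hand, the surviving context can be targeted with stock kubectl commands (illustrative only, not part of the test run):

  kubectl config use-context cert-expiration-113971    # make it the default context
  kubectl --context cert-expiration-113971 get nodes   # or name the context per command
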
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-704272

>>> host: docker daemon status:
* Profile "false-704272" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-704272"

>>> host: docker daemon config:
* Profile "false-704272" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-704272"

>>> host: /etc/docker/daemon.json:
* Profile "false-704272" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-704272"

>>> host: docker system info:
* Profile "false-704272" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-704272"

>>> host: cri-docker daemon status:
* Profile "false-704272" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-704272"

>>> host: cri-docker daemon config:
* Profile "false-704272" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-704272"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-704272" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-704272"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-704272" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-704272"

>>> host: cri-dockerd version:
* Profile "false-704272" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-704272"

>>> host: containerd daemon status:
* Profile "false-704272" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-704272"

>>> host: containerd daemon config:
* Profile "false-704272" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-704272"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-704272" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-704272"

>>> host: /etc/containerd/config.toml:
* Profile "false-704272" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-704272"

>>> host: containerd config dump:
* Profile "false-704272" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-704272"

>>> host: crio daemon status:
* Profile "false-704272" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-704272"

>>> host: crio daemon config:
* Profile "false-704272" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-704272"

>>> host: /etc/crio:
* Profile "false-704272" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-704272"

>>> host: crio config:
* Profile "false-704272" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-704272"

----------------------- debugLogs end: false-704272 [took: 3.24537288s] --------------------------------
helpers_test.go:175: Cleaning up "false-704272" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-704272
--- PASS: TestNetworkPlugins/group/false (3.53s)
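
All of the ">>> host: ..." probes in the debugLogs block above failed identically because the false-704272 profile was never started, so minikube printed its standard profile-not-found hint for each. Against a live profile, each probe is just minikube ssh plus a host command, in the same form the suite itself uses elsewhere in this log (the profile name below is illustrative):

  out/minikube-linux-amd64 profile list                                  # confirm the profile exists first
  out/minikube-linux-amd64 ssh -p auto-704272 "ip a s"                   # manual equivalent of ">>> host: ip a s:"
  out/minikube-linux-amd64 ssh -p auto-704272 "sudo cat /etc/containerd/config.toml"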

TestPause/serial/Start (141.49s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-695654 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=containerd
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-695654 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=containerd: (2m21.488905579s)
--- PASS: TestPause/serial/Start (141.49s)

TestNetworkPlugins/group/auto/Start (99.39s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-704272 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-704272 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=containerd: (1m39.390794235s)
--- PASS: TestNetworkPlugins/group/auto/Start (99.39s)

TestPause/serial/SecondStartNoReconfiguration (7.19s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-695654 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-695654 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (7.172892922s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (7.19s)

TestPause/serial/Pause (0.99s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-695654 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.99s)

TestPause/serial/VerifyStatus (0.25s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-695654 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-695654 --output=json --layout=cluster: exit status 2 (252.416136ms)

-- stdout --
	{"Name":"pause-695654","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 6 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-695654","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.25s)
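
The status JSON above is emitted on a single line; piping it through jq (assuming jq is available on the host) makes the component status codes easier to scan: 200 = OK, 418 = Paused, 405 = Stopped. Note the command itself still exits 2 while the cluster is paused, exactly as captured above.

  out/minikube-linux-amd64 status -p pause-695654 --output=json --layout=cluster | jq '.Nodes[0].Components'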

TestPause/serial/Unpause (0.7s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-695654 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.70s)

TestPause/serial/PauseAgain (0.81s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-695654 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.81s)

TestPause/serial/DeletePaused (1.01s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-695654 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-695654 --alsologtostderr -v=5: (1.013956057s)
--- PASS: TestPause/serial/DeletePaused (1.01s)

TestPause/serial/VerifyDeletedResources (0.42s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestPause/serial/VerifyDeletedResources (0.42s)

TestNetworkPlugins/group/calico/Start (93.94s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-704272 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-704272 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=containerd: (1m33.938640528s)
--- PASS: TestNetworkPlugins/group/calico/Start (93.94s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.13s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-260303
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-260303: (1.134002536s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.13s)

TestNetworkPlugins/group/custom-flannel/Start (99.73s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-704272 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=containerd
E0229 02:04:28.597280  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/addons-026134/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-704272 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=containerd: (1m39.731974514s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (99.73s)

TestNetworkPlugins/group/auto/KubeletFlags (0.22s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-704272 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.22s)

TestNetworkPlugins/group/auto/NetCatPod (9.26s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-704272 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-2bdjh" [34fada06-9712-4c35-b981-d43720a7d0df] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-2bdjh" [34fada06-9712-4c35-b981-d43720a7d0df] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.005560845s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.26s)
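
The NetCatPod steps above poll until the app=netcat pod reports Running and healthy. Outside the harness, a stock kubectl wait expresses the same check (context and label taken from the log above):

  kubectl --context auto-704272 wait --for=condition=Ready pod -l app=netcat --timeout=15m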

TestNetworkPlugins/group/auto/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-704272 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.18s)

TestNetworkPlugins/group/auto/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-704272 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.17s)

TestNetworkPlugins/group/auto/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-704272 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.14s)

TestNetworkPlugins/group/kindnet/Start (69.17s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-704272 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-704272 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=containerd: (1m9.173993242s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (69.17s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-d8r24" [b839b156-fafb-405c-9b61-46c124e5d289] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.007133066s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.22s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-704272 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.22s)

TestNetworkPlugins/group/calico/NetCatPod (9.29s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-704272 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-gd9zv" [8f9ef560-c95e-4d56-9d73-e01e46fbaa28] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-gd9zv" [8f9ef560-c95e-4d56-9d73-e01e46fbaa28] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 9.004657093s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (9.29s)

TestNetworkPlugins/group/calico/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-704272 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.17s)

TestNetworkPlugins/group/calico/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-704272 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.14s)

TestNetworkPlugins/group/calico/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-704272 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.13s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.22s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-704272 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.22s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (9.24s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-704272 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-g7ctq" [eec4140a-58f1-4d0c-8822-86db73caa425] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-g7ctq" [eec4140a-58f1-4d0c-8822-86db73caa425] Running
E0229 02:06:14.620634  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/functional-601906/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 9.006952217s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (9.24s)

TestNetworkPlugins/group/custom-flannel/DNS (0.25s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-704272 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.25s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.2s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-704272 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.20s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.22s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-704272 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.22s)

TestNetworkPlugins/group/flannel/Start (93.94s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-704272 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-704272 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=containerd: (1m33.936588907s)
--- PASS: TestNetworkPlugins/group/flannel/Start (93.94s)

TestNetworkPlugins/group/enable-default-cni/Start (132.5s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-704272 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-704272 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=containerd: (2m12.502390662s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (132.50s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-tznrk" [fe17dd9b-5402-4ab5-a9c0-416309a647de] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.005483136s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.22s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-704272 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.22s)

TestNetworkPlugins/group/kindnet/NetCatPod (10.24s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-704272 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-bvzdt" [227684c4-0c90-4838-9e16-f07306ec7d26] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-bvzdt" [227684c4-0c90-4838-9e16-f07306ec7d26] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.004935204s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.24s)

TestNetworkPlugins/group/kindnet/DNS (0.26s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-704272 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.26s)

TestNetworkPlugins/group/kindnet/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-704272 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.17s)

TestNetworkPlugins/group/kindnet/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-704272 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.18s)

TestNetworkPlugins/group/bridge/Start (112.13s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-704272 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-704272 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=containerd: (1m52.130677917s)
--- PASS: TestNetworkPlugins/group/bridge/Start (112.13s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-p9bgx" [ab079111-6f93-4c0f-bd65-d6ae1cf5797d] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.009446775s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-704272 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.29s)

TestNetworkPlugins/group/flannel/NetCatPod (12.44s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-704272 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-h9jkz" [0a1be1a0-3b9d-4999-8cdc-373e15196c8d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-h9jkz" [0a1be1a0-3b9d-4999-8cdc-373e15196c8d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 12.004701559s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (12.44s)

TestNetworkPlugins/group/flannel/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-704272 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.19s)

TestNetworkPlugins/group/flannel/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-704272 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.14s)

TestNetworkPlugins/group/flannel/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-704272 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.17s)

TestStartStop/group/no-preload/serial/FirstStart (139.71s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-907398 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-907398 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2: (2m19.705159574s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (139.71s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.24s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-704272 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.24s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.29s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-704272 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-8t892" [87de3bde-ac14-4369-a0be-7a92e36f964f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-8t892" [87de3bde-ac14-4369-a0be-7a92e36f964f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 9.004030189s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.29s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-704272 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.16s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-704272 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-704272 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (101.4s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-254367 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.4
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-254367 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.4: (1m41.400719415s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (101.40s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.32s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-704272 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.32s)

TestNetworkPlugins/group/bridge/NetCatPod (9.26s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-704272 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-cxvvz" [907f056c-560e-463d-bd5e-571e9dbdfc08] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-cxvvz" [907f056c-560e-463d-bd5e-571e9dbdfc08] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.00501338s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.26s)

TestNetworkPlugins/group/bridge/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-704272 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.17s)

TestNetworkPlugins/group/bridge/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-704272 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.12s)

TestNetworkPlugins/group/bridge/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-704272 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.12s)
E0229 02:18:25.654439  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/flannel-704272/client.crt: no such file or directory

TestStartStop/group/newest-cni/serial/FirstStart (60.56s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-268307 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2
E0229 02:10:17.835612  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/auto-704272/client.crt: no such file or directory
E0229 02:10:17.840932  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/auto-704272/client.crt: no such file or directory
E0229 02:10:17.851379  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/auto-704272/client.crt: no such file or directory
E0229 02:10:17.871699  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/auto-704272/client.crt: no such file or directory
E0229 02:10:17.912009  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/auto-704272/client.crt: no such file or directory
E0229 02:10:17.992464  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/auto-704272/client.crt: no such file or directory
E0229 02:10:18.152992  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/auto-704272/client.crt: no such file or directory
E0229 02:10:18.473420  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/auto-704272/client.crt: no such file or directory
E0229 02:10:19.113557  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/auto-704272/client.crt: no such file or directory
E0229 02:10:20.394596  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/auto-704272/client.crt: no such file or directory
E0229 02:10:22.955867  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/auto-704272/client.crt: no such file or directory
E0229 02:10:28.076408  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/auto-704272/client.crt: no such file or directory
E0229 02:10:38.317292  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/auto-704272/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-268307 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2: (1m0.556140927s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (60.56s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.16s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-268307 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-268307 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.157529544s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.16s)

TestStartStop/group/newest-cni/serial/Stop (2.11s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-268307 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-268307 --alsologtostderr -v=3: (2.11262903s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (2.11s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-268307 -n newest-cni-268307
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-268307 -n newest-cni-268307: exit status 7 (88.324206ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-268307 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.22s)
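
As the harness notes, exit status 7 from minikube status "may be ok" here: status exits non-zero while the host is stopped, and stdout still reports the state. A manual re-check of the same condition (the trailing || keeps a set -e shell from aborting on the expected failure):

  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-268307 -n newest-cni-268307 || echo "status exited $?"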

TestStartStop/group/newest-cni/serial/SecondStart (42.94s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-268307 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2
E0229 02:10:48.921675  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/calico-704272/client.crt: no such file or directory
E0229 02:10:48.926947  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/calico-704272/client.crt: no such file or directory
E0229 02:10:48.937192  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/calico-704272/client.crt: no such file or directory
E0229 02:10:48.957483  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/calico-704272/client.crt: no such file or directory
E0229 02:10:48.997724  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/calico-704272/client.crt: no such file or directory
E0229 02:10:49.078095  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/calico-704272/client.crt: no such file or directory
E0229 02:10:49.238633  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/calico-704272/client.crt: no such file or directory
E0229 02:10:49.558946  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/calico-704272/client.crt: no such file or directory
E0229 02:10:50.199213  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/calico-704272/client.crt: no such file or directory
E0229 02:10:51.480255  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/calico-704272/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-268307 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2: (42.667406997s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-268307 -n newest-cni-268307
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (42.94s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (11.3s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-254367 create -f testdata/busybox.yaml
E0229 02:10:54.040419  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/calico-704272/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [9e67cc2c-b95f-4d41-8074-2193a276ce9c] Pending
helpers_test.go:344: "busybox" [9e67cc2c-b95f-4d41-8074-2193a276ce9c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [9e67cc2c-b95f-4d41-8074-2193a276ce9c] Running
E0229 02:10:59.161334  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/calico-704272/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 11.004381335s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-254367 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (11.30s)
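For reference, the DeployApp flow above can be reproduced by hand. A minimal sketch, assuming testdata/busybox.yaml defines the single pod named busybox carrying the integration-test=busybox label seen in the wait output; kubectl wait stands in here for the harness's own polling:

    # Deploy the test pod and block until it is Ready (the harness allows up to 8m0s).
    kubectl --context default-k8s-diff-port-254367 create -f testdata/busybox.yaml
    kubectl --context default-k8s-diff-port-254367 wait pod -l integration-test=busybox --for=condition=Ready --timeout=8m0s
    # Same sanity check the test runs: read the container's open-file-descriptor limit.
    kubectl --context default-k8s-diff-port-254367 exec busybox -- /bin/sh -c "ulimit -n"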

TestStartStop/group/no-preload/serial/DeployApp (10.32s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-907398 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [bbc36a18-e210-4af3-9b08-c64311fcafbd] Pending
helpers_test.go:344: "busybox" [bbc36a18-e210-4af3-9b08-c64311fcafbd] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0229 02:10:58.798479  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/auto-704272/client.crt: no such file or directory
helpers_test.go:344: "busybox" [bbc36a18-e210-4af3-9b08-c64311fcafbd] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.004181665s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-907398 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.32s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.14s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-907398 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-907398 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.05808173s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-907398 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.14s)
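The describe call above is how the test inspects the result of the image and registry overrides. A hedged sketch of the same check done manually (the grep filter is illustrative, not part of the test):

    # Enable the addon with both overrides, then look for the rewritten image
    # reference in the deployment spec.
    out/minikube-linux-amd64 addons enable metrics-server -p no-preload-907398 \
        --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
    kubectl --context no-preload-907398 describe deploy/metrics-server -n kube-system | grep -i 'image:'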

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.32s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-254367 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-254367 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.218603478s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-254367 describe deploy/metrics-server -n kube-system
E0229 02:11:06.526527  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/custom-flannel-704272/client.crt: no such file or directory
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.32s)

TestStartStop/group/no-preload/serial/Stop (91.83s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-907398 --alsologtostderr -v=3
E0229 02:11:05.889354  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/custom-flannel-704272/client.crt: no such file or directory
E0229 02:11:05.894696  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/custom-flannel-704272/client.crt: no such file or directory
E0229 02:11:05.905035  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/custom-flannel-704272/client.crt: no such file or directory
E0229 02:11:05.925342  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/custom-flannel-704272/client.crt: no such file or directory
E0229 02:11:05.965682  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/custom-flannel-704272/client.crt: no such file or directory
E0229 02:11:06.046193  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/custom-flannel-704272/client.crt: no such file or directory
E0229 02:11:06.206376  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/custom-flannel-704272/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-907398 --alsologtostderr -v=3: (1m31.833996657s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (91.83s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (92.26s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-254367 --alsologtostderr -v=3
E0229 02:11:07.167422  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/custom-flannel-704272/client.crt: no such file or directory
E0229 02:11:08.447607  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/custom-flannel-704272/client.crt: no such file or directory
E0229 02:11:09.401924  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/calico-704272/client.crt: no such file or directory
E0229 02:11:11.007785  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/custom-flannel-704272/client.crt: no such file or directory
E0229 02:11:14.620809  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/functional-601906/client.crt: no such file or directory
E0229 02:11:16.128028  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/custom-flannel-704272/client.crt: no such file or directory
E0229 02:11:26.368806  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/custom-flannel-704272/client.crt: no such file or directory
E0229 02:11:29.882810  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/calico-704272/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-254367 --alsologtostderr -v=3: (1m32.262976948s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (92.26s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-268307 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.25s)
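The JSON image listing is also handy outside the harness. A sketch, assuming each entry in the returned array exposes a repoTags field (the field name is an assumption about this minikube version's schema) and that jq is available:

    # Print every image tag known to the cluster's container runtime.
    out/minikube-linux-amd64 -p newest-cni-268307 image list --format=json | jq -r '.[].repoTags[]'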

TestStartStop/group/newest-cni/serial/Pause (2.52s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-268307 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-268307 -n newest-cni-268307
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-268307 -n newest-cni-268307: exit status 2 (258.663123ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-268307 -n newest-cni-268307
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-268307 -n newest-cni-268307: exit status 2 (249.677974ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-268307 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-268307 -n newest-cni-268307
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-268307 -n newest-cni-268307
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.52s)

TestStartStop/group/embed-certs/serial/FirstStart (101.32s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-665766 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.4
E0229 02:11:39.759627  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/auto-704272/client.crt: no such file or directory
E0229 02:11:46.849406  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/custom-flannel-704272/client.crt: no such file or directory
E0229 02:11:52.589938  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/kindnet-704272/client.crt: no such file or directory
E0229 02:11:52.595214  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/kindnet-704272/client.crt: no such file or directory
E0229 02:11:52.605427  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/kindnet-704272/client.crt: no such file or directory
E0229 02:11:52.625655  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/kindnet-704272/client.crt: no such file or directory
E0229 02:11:52.665897  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/kindnet-704272/client.crt: no such file or directory
E0229 02:11:52.746895  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/kindnet-704272/client.crt: no such file or directory
E0229 02:11:52.907289  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/kindnet-704272/client.crt: no such file or directory
E0229 02:11:53.227910  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/kindnet-704272/client.crt: no such file or directory
E0229 02:11:53.868415  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/kindnet-704272/client.crt: no such file or directory
E0229 02:11:55.148864  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/kindnet-704272/client.crt: no such file or directory
E0229 02:11:57.709989  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/kindnet-704272/client.crt: no such file or directory
E0229 02:12:02.831150  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/kindnet-704272/client.crt: no such file or directory
E0229 02:12:10.843911  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/calico-704272/client.crt: no such file or directory
E0229 02:12:13.071623  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/kindnet-704272/client.crt: no such file or directory
E0229 02:12:27.810152  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/custom-flannel-704272/client.crt: no such file or directory
E0229 02:12:33.552455  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/kindnet-704272/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-665766 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.4: (1m41.318160174s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (101.32s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.23s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-907398 -n no-preload-907398
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-907398 -n no-preload-907398: exit status 7 (87.772149ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-907398 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.23s)
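Note the exit-code convention the test leans on: with the profile stopped, minikube status exits 7 here (which the harness treats as acceptable, per the "may be ok" note), yet an addon can still be enabled; the enablement is presumably recorded in the profile's config and applied on the next start. A sketch of the same sequence:

    out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-907398 -n no-preload-907398
    echo $?   # 7 while the host is stopped, as in the Non-zero exit above
    # Enabling an addon on the stopped profile still succeeds.
    out/minikube-linux-amd64 addons enable dashboard -p no-preload-907398 --images=MetricsScraper=registry.k8s.io/echoserver:1.4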

TestStartStop/group/no-preload/serial/SecondStart (324.89s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-907398 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-907398 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2: (5m24.456593707s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-907398 -n no-preload-907398
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (324.89s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.23s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-254367 -n default-k8s-diff-port-254367
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-254367 -n default-k8s-diff-port-254367: exit status 7 (86.929457ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-254367 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.23s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (344.88s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-254367 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.4
E0229 02:12:57.968909  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/flannel-704272/client.crt: no such file or directory
E0229 02:12:57.974174  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/flannel-704272/client.crt: no such file or directory
E0229 02:12:57.984457  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/flannel-704272/client.crt: no such file or directory
E0229 02:12:58.005186  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/flannel-704272/client.crt: no such file or directory
E0229 02:12:58.045495  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/flannel-704272/client.crt: no such file or directory
E0229 02:12:58.125886  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/flannel-704272/client.crt: no such file or directory
E0229 02:12:58.286699  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/flannel-704272/client.crt: no such file or directory
E0229 02:12:58.607679  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/flannel-704272/client.crt: no such file or directory
E0229 02:12:59.247999  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/flannel-704272/client.crt: no such file or directory
E0229 02:13:00.529118  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/flannel-704272/client.crt: no such file or directory
E0229 02:13:01.680548  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/auto-704272/client.crt: no such file or directory
E0229 02:13:03.089492  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/flannel-704272/client.crt: no such file or directory
E0229 02:13:08.210371  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/flannel-704272/client.crt: no such file or directory
E0229 02:13:14.513617  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/kindnet-704272/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-254367 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.4: (5m44.529908625s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-254367 -n default-k8s-diff-port-254367
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (344.88s)

TestStartStop/group/embed-certs/serial/DeployApp (10.34s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-665766 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [cf30eefd-cde2-4695-a575-110281d3e2dc] Pending
helpers_test.go:344: "busybox" [cf30eefd-cde2-4695-a575-110281d3e2dc] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0229 02:13:18.451114  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/flannel-704272/client.crt: no such file or directory
helpers_test.go:344: "busybox" [cf30eefd-cde2-4695-a575-110281d3e2dc] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.004685867s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-665766 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.34s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.19s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-665766 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-665766 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.125205367s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-665766 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.19s)

TestStartStop/group/embed-certs/serial/Stop (92.26s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-665766 --alsologtostderr -v=3
E0229 02:13:32.765016  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/calico-704272/client.crt: no such file or directory
E0229 02:13:38.931903  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/flannel-704272/client.crt: no such file or directory
E0229 02:13:46.416220  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/enable-default-cni-704272/client.crt: no such file or directory
E0229 02:13:46.421488  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/enable-default-cni-704272/client.crt: no such file or directory
E0229 02:13:46.431737  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/enable-default-cni-704272/client.crt: no such file or directory
E0229 02:13:46.451984  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/enable-default-cni-704272/client.crt: no such file or directory
E0229 02:13:46.492268  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/enable-default-cni-704272/client.crt: no such file or directory
E0229 02:13:46.572583  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/enable-default-cni-704272/client.crt: no such file or directory
E0229 02:13:46.732890  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/enable-default-cni-704272/client.crt: no such file or directory
E0229 02:13:47.053891  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/enable-default-cni-704272/client.crt: no such file or directory
E0229 02:13:47.694368  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/enable-default-cni-704272/client.crt: no such file or directory
E0229 02:13:48.975130  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/enable-default-cni-704272/client.crt: no such file or directory
E0229 02:13:49.730326  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/custom-flannel-704272/client.crt: no such file or directory
E0229 02:13:51.536307  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/enable-default-cni-704272/client.crt: no such file or directory
E0229 02:13:56.657349  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/enable-default-cni-704272/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-665766 --alsologtostderr -v=3: (1m32.264320487s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (92.26s)

TestStartStop/group/old-k8s-version/serial/Stop (1.36s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-254968 --alsologtostderr -v=3
E0229 02:14:06.897656  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/enable-default-cni-704272/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-254968 --alsologtostderr -v=3: (1.358495138s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (1.36s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-254968 -n old-k8s-version-254968
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-254968 -n old-k8s-version-254968: exit status 7 (75.020872ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-254968 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-665766 -n embed-certs-665766
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-665766 -n embed-certs-665766: exit status 7 (84.429776ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-665766 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.22s)

TestStartStop/group/embed-certs/serial/SecondStart (601.08s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-665766 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.4
E0229 02:15:08.338762  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/enable-default-cni-704272/client.crt: no such file or directory
E0229 02:15:17.835048  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/auto-704272/client.crt: no such file or directory
E0229 02:15:40.451395  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/bridge-704272/client.crt: no such file or directory
E0229 02:15:41.813350  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/flannel-704272/client.crt: no such file or directory
E0229 02:15:45.521616  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/auto-704272/client.crt: no such file or directory
E0229 02:15:48.921746  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/calico-704272/client.crt: no such file or directory
E0229 02:15:57.674957  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/functional-601906/client.crt: no such file or directory
E0229 02:16:05.889511  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/custom-flannel-704272/client.crt: no such file or directory
E0229 02:16:14.621058  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/functional-601906/client.crt: no such file or directory
E0229 02:16:16.606124  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/calico-704272/client.crt: no such file or directory
E0229 02:16:30.259792  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/enable-default-cni-704272/client.crt: no such file or directory
E0229 02:16:33.571114  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/custom-flannel-704272/client.crt: no such file or directory
E0229 02:16:52.589687  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/kindnet-704272/client.crt: no such file or directory
E0229 02:17:02.371639  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/bridge-704272/client.crt: no such file or directory
E0229 02:17:20.275147  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/kindnet-704272/client.crt: no such file or directory
E0229 02:17:57.969577  316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/flannel-704272/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-665766 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.4: (10m0.80054189s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-665766 -n embed-certs-665766
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (601.08s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (13.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-xsmlj" [250773e5-d8ea-4336-9734-ed8be7b48b76] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-xsmlj" [250773e5-d8ea-4336-9734-ed8be7b48b76] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 13.007125245s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (13.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-xsmlj" [250773e5-d8ea-4336-9734-ed8be7b48b76] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005311868s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-907398 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-907398 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.26s)

TestStartStop/group/no-preload/serial/Pause (3.14s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-907398 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-amd64 pause -p no-preload-907398 --alsologtostderr -v=1: (1.024565369s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-907398 -n no-preload-907398
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-907398 -n no-preload-907398: exit status 2 (288.697977ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-907398 -n no-preload-907398
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-907398 -n no-preload-907398: exit status 2 (267.563799ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-907398 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-907398 -n no-preload-907398
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-907398 -n no-preload-907398
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.14s)
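The Pause subtest always follows the same pattern; a condensed sketch, with the expected outputs taken from the log above (a paused apiserver reports Paused, the kubelet reports Stopped, and both status calls exit 2, which the test tolerates):

    out/minikube-linux-amd64 pause -p no-preload-907398 --alsologtostderr -v=1
    out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-907398   # "Paused", exit 2
    out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-907398     # "Stopped", exit 2
    out/minikube-linux-amd64 unpause -p no-preload-907398 --alsologtostderr -v=1
    out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-907398   # exit 0 once unpaused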

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (14.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-wzhbg" [e5bee8bf-8084-48b8-a022-c4a60d9afaa9] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-wzhbg" [e5bee8bf-8084-48b8-a022-c4a60d9afaa9] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 14.005482785s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (14.01s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-wzhbg" [e5bee8bf-8084-48b8-a022-c4a60d9afaa9] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004436118s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-254367 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.23s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-254367 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.23s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (2.81s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-254367 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-254367 -n default-k8s-diff-port-254367
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-254367 -n default-k8s-diff-port-254367: exit status 2 (255.972603ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-254367 -n default-k8s-diff-port-254367
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-254367 -n default-k8s-diff-port-254367: exit status 2 (267.224154ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-254367 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-254367 -n default-k8s-diff-port-254367
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-254367 -n default-k8s-diff-port-254367
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.81s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-nb9dw" [a48d2aae-e3db-421d-bb89-65ae6ff41128] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.007548993s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-nb9dw" [a48d2aae-e3db-421d-bb89-65ae6ff41128] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004982008s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-665766 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-665766 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/embed-certs/serial/Pause (2.69s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-665766 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-665766 -n embed-certs-665766
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-665766 -n embed-certs-665766: exit status 2 (249.486905ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-665766 -n embed-certs-665766
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-665766 -n embed-certs-665766: exit status 2 (252.033892ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-665766 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-665766 -n embed-certs-665766
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-665766 -n embed-certs-665766
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.69s)

Test skip (39/316)

Order  Skipped test  Duration
5 TestDownloadOnly/v1.16.0/cached-images 0
6 TestDownloadOnly/v1.16.0/binaries 0
7 TestDownloadOnly/v1.16.0/kubectl 0
14 TestDownloadOnly/v1.28.4/cached-images 0
15 TestDownloadOnly/v1.28.4/binaries 0
16 TestDownloadOnly/v1.28.4/kubectl 0
23 TestDownloadOnly/v1.29.0-rc.2/cached-images 0
24 TestDownloadOnly/v1.29.0-rc.2/binaries 0
25 TestDownloadOnly/v1.29.0-rc.2/kubectl 0
29 TestDownloadOnlyKic 0
43 TestAddons/parallel/Olm 0
56 TestDockerFlags 0
59 TestDockerEnvContainerd 0
61 TestHyperKitDriverInstallOrUpdate 0
62 TestHyperkitDriverSkipUpgrade 0
113 TestFunctional/parallel/DockerEnv 0
114 TestFunctional/parallel/PodmanEnv 0
149 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
150 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
151 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
152 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
153 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
154 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
155 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
156 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
162 TestGvisorAddon 0
163 TestImageBuild 0
196 TestKicCustomNetwork 0
197 TestKicExistingNetwork 0
198 TestKicCustomSubnet 0
199 TestKicStaticIP 0
231 TestChangeNoneUser 0
234 TestScheduledStopWindows 0
236 TestSkaffold 0
238 TestInsufficientStorage 0
242 TestMissingContainerUpgrade 0
257 TestNetworkPlugins/group/kubenet 3.41
265 TestNetworkPlugins/group/cilium 3.96
271 TestStartStop/group/disable-driver-mounts 0.15

TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestDownloadOnly/v1.28.4/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.4/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.4/cached-images (0.00s)

TestDownloadOnly/v1.28.4/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.4/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.4/binaries (0.00s)

TestDownloadOnly/v1.28.4/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.4/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.4/kubectl (0.00s)

TestDownloadOnly/v1.29.0-rc.2/cached-images (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/cached-images (0.00s)

TestDownloadOnly/v1.29.0-rc.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/binaries (0.00s)

TestDownloadOnly/v1.29.0-rc.2/kubectl (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/kubectl (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:498: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only runs with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/kubenet (3.41s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as the containerd container runtime requires CNI
panic.go:626: 
----------------------- debugLogs start: kubenet-704272 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-704272

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-704272

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-704272

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-704272

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-704272

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-704272

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-704272

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-704272

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-704272

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-704272

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-704272" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-704272"

>>> host: /etc/hosts:
* Profile "kubenet-704272" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-704272"

>>> host: /etc/resolv.conf:
* Profile "kubenet-704272" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-704272"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-704272

>>> host: crictl pods:
* Profile "kubenet-704272" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-704272"

>>> host: crictl containers:
* Profile "kubenet-704272" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-704272"

>>> k8s: describe netcat deployment:
error: context "kubenet-704272" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-704272" does not exist

>>> k8s: netcat logs:
error: context "kubenet-704272" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-704272" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-704272" does not exist

>>> k8s: coredns logs:
error: context "kubenet-704272" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-704272" does not exist

>>> k8s: api server logs:
error: context "kubenet-704272" does not exist

>>> host: /etc/cni:
* Profile "kubenet-704272" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-704272"

>>> host: ip a s:
* Profile "kubenet-704272" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-704272"

>>> host: ip r s:
* Profile "kubenet-704272" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-704272"

>>> host: iptables-save:
* Profile "kubenet-704272" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-704272"

>>> host: iptables table nat:
* Profile "kubenet-704272" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-704272"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-704272" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-704272" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-704272" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-704272" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-704272"

>>> host: kubelet daemon config:
* Profile "kubenet-704272" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-704272"

>>> k8s: kubelet logs:
* Profile "kubenet-704272" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-704272"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-704272" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-704272"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-704272" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-704272"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/18063-309085/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 29 Feb 2024 02:00:23 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.39.251:8443
  name: cert-expiration-113971
contexts:
- context:
    cluster: cert-expiration-113971
    extensions:
    - extension:
        last-update: Thu, 29 Feb 2024 02:00:23 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: context_info
    namespace: default
    user: cert-expiration-113971
  name: cert-expiration-113971
current-context: ""
kind: Config
preferences: {}
users:
- name: cert-expiration-113971
  user:
    client-certificate: /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/cert-expiration-113971/client.crt
    client-key: /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/cert-expiration-113971/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-704272

>>> host: docker daemon status:
* Profile "kubenet-704272" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-704272"

>>> host: docker daemon config:
* Profile "kubenet-704272" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-704272"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-704272" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-704272"

>>> host: docker system info:
* Profile "kubenet-704272" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-704272"

>>> host: cri-docker daemon status:
* Profile "kubenet-704272" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-704272"

>>> host: cri-docker daemon config:
* Profile "kubenet-704272" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-704272"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-704272" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-704272"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-704272" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-704272"

>>> host: cri-dockerd version:
* Profile "kubenet-704272" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-704272"

>>> host: containerd daemon status:
* Profile "kubenet-704272" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-704272"

>>> host: containerd daemon config:
* Profile "kubenet-704272" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-704272"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-704272" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-704272"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-704272" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-704272"

>>> host: containerd config dump:
* Profile "kubenet-704272" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-704272"

>>> host: crio daemon status:
* Profile "kubenet-704272" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-704272"

>>> host: crio daemon config:
* Profile "kubenet-704272" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-704272"

>>> host: /etc/crio:
* Profile "kubenet-704272" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-704272"

>>> host: crio config:
* Profile "kubenet-704272" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-704272"

----------------------- debugLogs end: kubenet-704272 [took: 3.253498214s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-704272" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-704272
--- SKIP: TestNetworkPlugins/group/kubenet (3.41s)

TestNetworkPlugins/group/cilium (3.96s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-704272 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-704272

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-704272

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-704272

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-704272

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-704272

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-704272

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-704272

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-704272

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-704272

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-704272

>>> host: /etc/nsswitch.conf:
* Profile "cilium-704272" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-704272"

>>> host: /etc/hosts:
* Profile "cilium-704272" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-704272"

>>> host: /etc/resolv.conf:
* Profile "cilium-704272" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-704272"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-704272

>>> host: crictl pods:
* Profile "cilium-704272" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-704272"

>>> host: crictl containers:
* Profile "cilium-704272" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-704272"

>>> k8s: describe netcat deployment:
error: context "cilium-704272" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-704272" does not exist

>>> k8s: netcat logs:
error: context "cilium-704272" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-704272" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-704272" does not exist

>>> k8s: coredns logs:
error: context "cilium-704272" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-704272" does not exist

>>> k8s: api server logs:
error: context "cilium-704272" does not exist

>>> host: /etc/cni:
* Profile "cilium-704272" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-704272"

>>> host: ip a s:
* Profile "cilium-704272" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-704272"

>>> host: ip r s:
* Profile "cilium-704272" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-704272"

>>> host: iptables-save:
* Profile "cilium-704272" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-704272"

>>> host: iptables table nat:
* Profile "cilium-704272" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-704272"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-704272

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-704272

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-704272" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-704272" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-704272

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-704272

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-704272" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-704272" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-704272" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-704272" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-704272" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-704272" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-704272"

>>> host: kubelet daemon config:
* Profile "cilium-704272" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-704272"

>>> k8s: kubelet logs:
* Profile "cilium-704272" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-704272"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-704272" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-704272"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-704272" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-704272"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/18063-309085/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 29 Feb 2024 02:00:23 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.39.251:8443
  name: cert-expiration-113971
contexts:
- context:
    cluster: cert-expiration-113971
    extensions:
    - extension:
        last-update: Thu, 29 Feb 2024 02:00:23 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: context_info
    namespace: default
    user: cert-expiration-113971
  name: cert-expiration-113971
current-context: ""
kind: Config
preferences: {}
users:
- name: cert-expiration-113971
  user:
    client-certificate: /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/cert-expiration-113971/client.crt
    client-key: /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/cert-expiration-113971/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-704272

>>> host: docker daemon status:
* Profile "cilium-704272" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-704272"

>>> host: docker daemon config:
* Profile "cilium-704272" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-704272"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-704272" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-704272"

>>> host: docker system info:
* Profile "cilium-704272" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-704272"

>>> host: cri-docker daemon status:
* Profile "cilium-704272" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-704272"

>>> host: cri-docker daemon config:
* Profile "cilium-704272" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-704272"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-704272" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-704272"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-704272" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-704272"

>>> host: cri-dockerd version:
* Profile "cilium-704272" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-704272"

>>> host: containerd daemon status:
* Profile "cilium-704272" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-704272"

>>> host: containerd daemon config:
* Profile "cilium-704272" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-704272"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-704272" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-704272"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-704272" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-704272"

>>> host: containerd config dump:
* Profile "cilium-704272" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-704272"

>>> host: crio daemon status:
* Profile "cilium-704272" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-704272"

>>> host: crio daemon config:
* Profile "cilium-704272" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-704272"

>>> host: /etc/crio:
* Profile "cilium-704272" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-704272"

>>> host: crio config:
* Profile "cilium-704272" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-704272"

----------------------- debugLogs end: cilium-704272 [took: 3.804480261s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-704272" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-704272
--- SKIP: TestNetworkPlugins/group/cilium (3.96s)

TestStartStop/group/disable-driver-mounts (0.15s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-276073" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-276073
--- SKIP: TestStartStop/group/disable-driver-mounts (0.15s)